Note: Cool Solutions are articles documenting additional functionality based on Univention products. Not all steps shown in this article are covered by Univention Support. If you have questions about your support coverage, contact your Univention contact person before implementing any of the steps shown here.
The following article describes how to migrate a standard installation of UCS into a UVMM instance. The values and device names must be adapted to each individual setup.
If the server has to remain accessible during the migration process, special care must be taken with application and user data. This, however, is beyond the scope of this article. If desired, Univention offers consulting for such a scenario.
If the DC Master does not hold any data or provide services that need to be migrated, installing a DC Backup and promoting it to DC Master is preferable to migrating the DC Master itself. The promotion is done with the following command on the DC Backup:
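On UCS 4.x the promotion is typically performed with the backup2master script; verify the exact path for your UCS version:

```shell
# Promote this DC Backup to DC Master (path as shipped with UCS 4.x;
# verify the location for your UCS version)
/usr/lib/univention-ldap/univention-backup2master
```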
If this is not possible and more servers in addition to the DC Master need to be migrated, the DC Master must be saved and reinstated as first system. If desired, Univention offers advice for such a scenario.
In addition to the UCS system to be migrated and a working installation of UVMM, a Linux Live-CD with LVM support (e.g. SystemRescueCd) is needed.
You can now continue by either copying the whole harddrive (including empty space) or by only copying the actual files on your hard drive to a new UVMM instance.
To transfer a native UCS installation into a UVMM instance, the harddrive has to be saved as an image file, which is then used by UVMM.
Hint: If the native installation used more than one harddrive the following procedure must be repeated for every harddrive!
To create the image file, the system must be booted with the Live-CD. The installed harddrives can be identified by using the following command:
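For example, either of the following commands lists the block devices present on most live systems:

```shell
# Print all disks and their partition tables (names like sda, sdb identify the drives)
fdisk -l
# or, for a compact tree of devices and sizes:
lsblk
```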
It is recommended to use an external harddrive or a USB flash drive to transfer the system:
dd if=/dev/sda of=/mnt/usb/ucs.raw
Depending on the size of the device sda this will take some time.
You can also transfer the harddrive by combining the dd command with scp or ssh. This eliminates the need for external storage, but may take longer due to CPU and network limits. Consult the following article if you need help: Transfer image compressed via SSH
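A possible pipeline, assuming SSH access from the Live-CD to the UVMM host (the hostname is a placeholder, and the target path matches the image directory used below):

```shell
# Stream the disk compressed over the network directly into the UVMM image directory.
# <uvmm-host> is a placeholder - adapt it to your setup.
dd if=/dev/sda | gzip -c | ssh root@<uvmm-host> 'gunzip -c > /var/lib/libvirt/images/ucs.raw'
```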
When the process is finished for all drives you want to copy, the system can be shut down.
The image is saved as raw file and can now be used as a virtual harddrive with UVMM. For this, the raw image is copied to /var/lib/libvirt/images on the UVMM host.
If KVM is used for virtualization and advanced features should be used, then the image must be converted to a qcow2 file:
cd /var/lib/libvirt/images/
qemu-img convert -f raw -O qcow2 ucs.raw ucs.qcow2
Afterwards, a new virtual instance must be created, using the profile which describes the installation best, e.g. UCS 4.1 (64 Bit). The raw or qcow2 file must be assigned as harddrive. After starting the virtual instance, the system boots up.
To transfer a native UCS installation into a UVMM instance, its harddrive is saved to a file and then copied into an existing, already virtualized, UCS installation.
Attention: If the system that is about to be migrated contains databases or provides user data on network shares, make sure that the databases are shut down properly and that no users are connected to the server, to prevent data loss!
Using the following script, all files and ACLs are saved on the server:
#!/bin/sh
export BACKUPPATH=<my-Backup-Path>
mkdir -p $BACKUPPATH
ionice -c 3 nice -n 20 tar cvfj $BACKUPPATH/files.tar.bz2 --numeric-owner --atime-preserve -X <exclude> --exclude=$BACKUPPATH /
getfacl --skip-base -RP / > $BACKUPPATH/acl
Please ensure that the directory my-Backup-Path has enough free space to hold the backup. Furthermore, the file exclude must be created, containing the directories that are not to be included in the backup. The following directories must be excluded via this file:
/dev
/dev/shm
/dev/pts
/lib/init/rw
/media
/mnt
/proc
/sys
/var/lib/nfs/rpc_pipefs
It is recommended to exclude the following directories as well:
/tmp
/var/backups
/var/cache/apt/archives
/var/lib/univention-repository
/var/lib/univention-ldap/replog
/var/tmp
/var/univention-backup
Attention: If the backup is saved directly to a mounted drive, the mountpoint must be excluded as well; otherwise the backup will recursively include its own files!
The directory /var/log should be included in the backup, because some services refuse to start when their log files are missing. It is therefore recommended to exclude the gzipped (rotated) files and include only the current log files.
The script must be made executable before it is run as the root user:
chmod +x script
Before shutting down the old system, write down the partitioning, using gdisk for a GPT-partitioned system or fdisk for an MBR-partitioned system. To be safe, you can also write down the information displayed by the following commands:
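For example, the following commands show the partition table, the filesystem UUIDs, and the LVM layout of the old system:

```shell
fdisk -l /dev/sda   # partition table (use "gdisk -l /dev/sda" for GPT)
blkid               # filesystem types and UUIDs of all partitions
pvdisplay           # LVM physical volumes
vgdisplay           # LVM volume groups
lvdisplay           # LVM logical volumes and their sizes
```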
First, create a new VM with the profile matching your native installation, e.g. UCS 4.1 (64 Bit); if unsure, use Other (64 Bit). The virtual harddrives must match the source system in number, and each must be at least as large as its counterpart. Furthermore, a second harddrive must be created, at least as large as the backup file, and the Live-CD must be mounted in the virtual CD-ROM drive.
When booting the Live-CD, select the kernel matching the old server's architecture.
After the system is booted up, make sure the system has an IP address.
If the old installation used a GPT table, use gdisk; otherwise use fdisk. Run the appropriate tool to partition the harddrives:
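With a virtio disk, the first virtual harddrive usually appears as /dev/vda:

```shell
fdisk /dev/vda    # or: gdisk /dev/vda for a GPT layout
```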
Create the exact same partitioning as on your old system. The command sequence in gdisk is similar to the one in fdisk:
n -> p -> 1 -> (empty) -> 999423
a -> 1
n -> e -> 2 -> (empty) -> (empty)
n -> (empty) -> (empty)
t -> 5 -> 8e
w
Now the LVM volume group must be created on the harddrive. In a UCS standard installation, one volume group exists with two logical volumes (root and swap). This can be recreated with the following commands:
pvcreate /dev/vda5
vgcreate vg_ucs /dev/vda5
lvcreate -L 2G -n swap_1 vg_ucs
lvcreate -l 100%FREE -n root vg_ucs
Now the /boot and LVM partitions can be formatted, and the swap partition can be created:
mkfs.ext2 /dev/vda1
mkswap /dev/mapper/vg_ucs-swap_1
mkfs.ext4 /dev/mapper/vg_ucs-root
Attention: Make sure to format the partitions as they are in the native installation to prevent errors!
If you copy the backup file over the network, the second harddrive must be partitioned and formatted:
fdisk /dev/vdb
n -> p -> 1 -> (empty) -> (empty)
w
mkfs.ext4 /dev/vdb1
Now the harddrives are ready to be mounted to the following locations:
mkdir -p /mnt/custom /mnt/backup
mount /dev/mapper/vg_ucs-root /mnt/custom -o acl
mkdir /mnt/custom/boot
mount /dev/vda1 /mnt/custom/boot -o acl
mount /dev/vdb1 /mnt/backup
When the backup files are to be moved over the network, use the following commands:
Note: Make sure that your network interface is configured.
scp root@<my-backup-host>:<my-backup-path>/files.tar.bz2 /mnt/backup
scp root@<my-backup-host>:<my-backup-path>/acl /mnt/backup
When the files are saved on an external drive, mount the drive and copy the files into the folder /mnt/backup.
The files can now be extracted from the archive:
cd /mnt/custom
tar xvjp --atime-preserve --numeric-owner -f /mnt/backup/files.tar.bz2 -C ./
All directories listed in the exclude file must now be created. Additionally, the folders /tmp and /var/tmp must be made writable for everyone:
chmod 777 /mnt/custom/tmp
chmod 777 /mnt/custom/var/tmp
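The excluded directories can be recreated in one pass, assuming the exclude file from the backup step has been copied to the current directory (the filename is the one used during the backup):

```shell
# Recreate every directory listed in the exclude file inside the new root.
# "exclude" is the same file that was passed to tar with -X during the backup.
while read -r dir; do
    mkdir -p "/mnt/custom${dir}"
done < exclude
```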
Finally the ACLs must be copied to the new environment so they can be restored later:
cp /mnt/backup/acl /mnt/custom/tmp/acl
After the folder structure is created, the system directories must be mounted into the new environment:
mount -o bind /proc /mnt/custom/proc
mount -o bind /dev /mnt/custom/dev
mount -o bind /sys /mnt/custom/sys
Now the new environment can be accessed by using chroot:
chroot /mnt/custom /bin/bash
The ACLs must now be restored:
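Inside the chroot, the ACL file copied earlier to /mnt/custom/tmp/acl is available at /tmp/acl. Since getfacl records paths relative to the root, change to / before restoring:

```shell
cd /
setfacl --restore=/tmp/acl
```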
Now the Grub bootloader must be installed and its configuration updated:
grub-install /dev/vda
update-grub
Now you should update the /etc/fstab file.
You will have to update the UUID of your /boot partition (/dev/vda1). You can find the new UUID with the following command:
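blkid prints the UUID of the new /boot partition:

```shell
# Show the UUID of the freshly formatted /boot partition
blkid /dev/vda1
# output has the form: /dev/vda1: UUID="..." TYPE="ext2"
```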
The root and swap lines should already be correct. The three lines should look like this now:
/dev/mapper/vg_ucs-root    /      ext4  errors=remount-ro  0  1
UUID=<new uuid>            /boot  ext2  defaults           0  2
/dev/mapper/vg_ucs-swap_1  none   swap  sw                 0  0
Now exit the chroot environment and reboot the system. The migrated system should now be ready to be used.
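A minimal sequence for this final step, unmounting in reverse order of mounting:

```shell
exit    # leave the chroot
umount /mnt/custom/proc /mnt/custom/dev /mnt/custom/sys
umount /mnt/custom/boot /mnt/custom /mnt/backup
reboot
```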