UVMM loses instances (no longer displayed after starting)

Good day,

I have several Xen instances installed under UVMM. Right after creating them everything worked fine, but after the instances had been shut down for a few days (server uptime at that point: 5 days), I wanted to start them again via UVMM.

I could see the individual instances in the web interface and click "start" there. After that, the instances disappeared from the list and did not become visible again, even after a restart.

UCS 2.4 with the current security updates is installed.
Output of univention-updater net:
Checking network repository
System is up to date (UCS 2.4-1)

Installed kernel:
Linux sv001 2.6.32-ucs21-xen-amd64 #1 SMP Thu Nov 25 04:16:00 UTC 2010 x86_64 GNU/Linux

The following error messages appear in the syslog when starting an instance that is no longer displayed afterwards:

Jan 18 08:48:20 sv001 kernel: [ 1225.090015] alloc irq_desc for 2241 on node 0
Jan 18 08:48:20 sv001 kernel: [ 1225.090018] alloc kstat_irqs on node 0
Jan 18 08:48:20 sv001 logger: /etc/xen/scripts/block: add XENBUS_PATH=backend/vbd/2/832
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/vif-bridge: online XENBUS_PATH=backend/vif/2/0
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/block: Writing backend/vbd/2/832/node /dev/loop1 to xenstore.
Jan 18 08:48:21 sv001 kernel: [ 1225.268611] device vif2.0 entered promiscuous mode
Jan 18 08:48:21 sv001 kernel: [ 1225.270727] eth0: port 3(vif2.0) entering forwarding state
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/block: Writing backend/vbd/2/832/physical-device 7:1 to xenstore.
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/block: Writing backend/vbd/2/832/hotplug-status connected to xenstore.
Jan 18 08:48:21 sv001 kernel: [ 1225.296247] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/vif-bridge: iptables setup failed. This may affect guest networking.
Jan 18 08:48:21 sv001 kernel: [ 1225.300191] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:48:21 sv001 kernel: [ 1225.300195] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/vif-bridge: Successful vif-bridge online for vif2.0, bridge eth0.
Jan 18 08:48:21 sv001 logger: /etc/xen/scripts/vif-bridge: Writing backend/vif/2/0/hotplug-status connected to xenstore.
Jan 18 08:48:21 sv001 kernel: [ 1225.385710] alloc irq_desc for 2240 on node 0
Jan 18 08:48:21 sv001 kernel: [ 1225.385716] alloc kstat_irqs on node 0
Jan 18 08:48:21 sv001 kernel: [ 1225.601741] alloc irq_desc for 2239 on node 0
Jan 18 08:48:21 sv001 kernel: [ 1225.601747] alloc kstat_irqs on node 0
Jan 18 08:48:21 sv001 kernel: [ 1225.632384] alloc irq_desc for 2238 on node 0
Jan 18 08:48:21 sv001 kernel: [ 1225.632387] alloc kstat_irqs on node 0
Jan 18 08:48:22 sv001 kernel: [ 1227.005253] blkback: ring-ref 8, event-channel 10, protocol 1 (x86_64-abi)
Jan 18 08:48:22 sv001 kernel: [ 1227.005296] alloc irq_desc for 2237 on node 0
Jan 18 08:48:22 sv001 kernel: [ 1227.005298] alloc kstat_irqs on node 0
Jan 18 08:48:27 sv001 kernel: [ 1231.296109] alloc irq_desc for 2236 on node 0
Jan 18 08:48:27 sv001 kernel: [ 1231.296112] alloc kstat_irqs on node 0
Jan 18 08:48:31 sv001 kernel: [ 1235.928923] vif2.0: no IPv6 routers present
Jan 18 08:49:27 sv001 kernel: [ 1291.022217] alloc irq_desc for 2235 on node 0
Jan 18 08:49:27 sv001 kernel: [ 1291.022223] alloc kstat_irqs on node 0
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/block: add XENBUS_PATH=backend/vbd/3/832
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/vif-bridge: online XENBUS_PATH=backend/vif/3/0
Jan 18 08:49:28 sv001 kernel: [ 1292.611979] alloc irq_desc for 2234 on node 0
Jan 18 08:49:28 sv001 kernel: [ 1292.611984] alloc kstat_irqs on node 0
Jan 18 08:49:28 sv001 kernel: [ 1292.997176] device vif3.0 entered promiscuous mode
Jan 18 08:49:28 sv001 kernel: [ 1293.001649] eth0: port 4(vif3.0) entering forwarding state
Jan 18 08:49:28 sv001 kernel: [ 1293.124631] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:49:28 sv001 kernel: [ 1293.124639] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/block: Writing backend/vbd/3/832/node /dev/loop2 to xenstore.
Jan 18 08:49:28 sv001 kernel: [ 1293.129920] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:49:28 sv001 kernel: [ 1293.129928] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:49:28 sv001 kernel: [ 1293.129934] physdev match: using --physdev-out in the OUTPUT, FORWARD and POSTROUTING chains for non-bridged traffic is not supported anymore.
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/vif-bridge: iptables setup failed. This may affect guest networking.
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/block: Writing backend/vbd/3/832/physical-device 7:2 to xenstore.
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/vif-bridge: Successful vif-bridge online for vif3.0, bridge eth0.
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/vif-bridge: Writing backend/vif/3/0/hotplug-status connected to xenstore.
Jan 18 08:49:28 sv001 logger: /etc/xen/scripts/block: Writing backend/vbd/3/832/hotplug-status connected to xenstore.
Jan 18 08:49:29 sv001 kernel: [ 1293.374839] alloc irq_desc for 2233 on node 0
Jan 18 08:49:29 sv001 kernel: [ 1293.374846] alloc kstat_irqs on node 0
Jan 18 08:49:29 sv001 kernel: [ 1293.399137] alloc irq_desc for 2232 on node 0
Jan 18 08:49:29 sv001 kernel: [ 1293.399140] alloc kstat_irqs on node 0
Jan 18 08:49:30 sv001 kernel: [ 1294.765837] blkback: ring-ref 8, event-channel 10, protocol 1 (x86_64-abi)
Jan 18 08:49:30 sv001 kernel: [ 1294.765879] alloc irq_desc for 2231 on node 0
Jan 18 08:49:30 sv001 kernel: [ 1294.765881] alloc kstat_irqs on node 0
Jan 18 08:49:35 sv001 kernel: [ 1299.539580] alloc irq_desc for 2230 on node 0
Jan 18 08:49:35 sv001 kernel: [ 1299.539584] alloc kstat_irqs on node 0
Jan 18 08:49:39 sv001 kernel: [ 1303.747626] vif3.0: no IPv6 routers present
Jan 18 08:49:41 sv001 nagios2: HOST ALERT: sv003-benno.grabbe-it.de;UP;HARD;1;PING OK - Packet loss = 0%, RTA = 0.24 ms
Jan 18 08:49:41 sv001 nagios2: HOST NOTIFICATION: root@localhost;sv003-benno.grabbe-it.de;UP;host-notify-by-email;PING OK - Packet loss = 0%, RTA = 0.24 ms
Jan 18 08:49:43 sv001 nagios2: SERVICE ALERT: sv003-benno.grabbe-it.de;UNIVENTION_DNS;OK;HARD;1;DNS OK: 0.129 seconds response time. univention.de returns 78.46.6.143
Jan 18 08:49:56 sv001 kernel: [ 1320.645639] lo: Disabled Privacy Extensions
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "qemu://sv001.grabbe-it.de/system" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "xen://sv001.grabbe-it.de/" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "xen://sv001.grabbe-it.de/" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "qemu://sv001.grabbe-it.de/system" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: uri != string: None
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "xen://sv001.grabbe-it.de/" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "qemu://sv001.grabbe-it.de/system" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "xen://sv001.grabbe-it.de/" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: Hypervisor "qemu://sv001.grabbe-it.de/system" is unavailable.
Jan 18 08:50:00 sv001 python2.4: UVMM: request failed: uri != string: None
Jan 18 08:50:00 sv001 python2.4: The execution of the command 'uvmm/domain/overview' failed: Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/univention/management/console/handlers/__init__.py", line 160, in execute func( object ) File "/usr/lib/python2.4/site-packages/univention/management/console/handlers/uvmm/__init__.py", line 850, in uvmm_domain_overview node_info, domain_info = self.uvmm.get_domain_info_ext( node_uri, object.options[ 'domain' ] ) File "/usr/lib/python2.4/site-packages/univention/management/console/handlers/uvmm/uvmmd.py", line 217, in get_domain_info_ext for domain_info in node_info.domains: AttributeError: 'NoneType' object has no attribute 'domains'
Jan 18 08:50:06 sv001 /USR/SBIN/CRON[12720]: (root) CMD ( if [ -x /usr/sbin/univention-umount-homedirs ]; then /usr/sbin/univention-umount-homedirs; fi)
Jan 18 08:50:06 sv001 /USR/SBIN/CRON[12721]: (root) CMD (if [ -x /usr/bin/mrtg ] && [ -r /etc/mrtg.cfg ]; then env LANG=C /usr/bin/mrtg /etc/mrtg.cfg 2>&1 | tee -a /var/log/mrtg/mrtg.log ; fi)
Jan 18 08:50:06 sv001 /USR/SBIN/CRON[12722]: (root) CMD (/usr/sbin/univention-share-replication)

After logging out of the UCS interface and waiting ten minutes, the instance appears in UVMM again.

The following error is displayed in the web browser:
Traceback (most recent call last):
File "/usr/lib/python2.4/site-packages/univention/management/console/handlers/__init__.py", line 160, in execute
func( object )
File "/usr/lib/python2.4/site-packages/univention/management/console/handlers/uvmm/__init__.py", line 850, in uvmm_domain_overview
node_info, domain_info = self.uvmm.get_domain_info_ext( node_uri, object.options[ 'domain' ] )
File "/usr/lib/python2.4/site-packages/univention/management/console/handlers/uvmm/uvmmd.py", line 217, in get_domain_info_ext
for domain_info in node_info.domains:
AttributeError: 'NoneType' object has no attribute 'domains'

Hello,

I suspect this is the behavior described in this entry in our bug tracking system.

Should the problem occur again, you should restart the "univention-virtual-machine-manager-node-common" service on the virtualization server:

/etc/init.d/univention-virtual-machine-manager-node-common restart
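
As a quick check after the restart (assuming the standard libvirt command-line tools are installed on the Xen host), you can query the hypervisor connection directly; the previously "lost" instances should be listed again:

 virsh -c xen:/// list --all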

We have frequently observed this problem under high load. When using Xen as the hypervisor, we recommend setting the UCR variable "grub/xenhopt" so that the hypervisor has enough resources available, for example:

 ucr set grub/xenhopt="dom0_mem=1024M dom0_max_vcpus=1 dom0_vcpus_pin"

The actual values can vary considerably, however, depending on which services are running on the system and what hardware the system is equipped with.
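
Since grub/xenhopt is passed to the Xen hypervisor as a boot parameter, the change only takes effect after the next reboot of the host. The currently configured value can be checked with:

 ucr get grub/xenhopt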

Kind regards
Tobias Scherer