Primary node not being resolved correctly on replica node

At the moment, I have four UCS deployments: one Primary node, two Backup nodes, and one Replica node. The Primary and Backups are located at Site A, the Replica is located at Site B. (Two more Replica nodes will later be deployed at Site C and Site D, but I figure if I can get Site B working correctly, the others will fall into place more easily.)

The Primary node has two NICs: one serves the office subnet x.x.3.1/24, and the other serves the server subnet x.x.8.1/24. One of the Backup nodes is set up the same way; the other Backup node serves only the office subnet. Site A and Site B are connected via an IPsec tunnel between the two server subnets. The Primary has DNS entries for both NICs, but the Replica node at Site B cannot successfully join the domain.

Whenever the join fails, the Replica is resolving the Primary's hostname to the office subnet address (x.x.3.2), but I need it to resolve to the server subnet address (x.x.8.5). Running `dig my-primary.example.intranet` returns both records, and `ping my-primary.example.intranet` fails as well, since it also picks the office subnet address.

How can I make the correction so the Replica node at Site B resolves the Primary node at Site A correctly?
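One way to pin this on the Replica is a static hosts entry that takes precedence over DNS. This is a sketch, assuming UCS's `hosts/static/*` Config Registry variables (which write entries into `/etc/hosts`); the hostname and the address x.x.8.5 are taken from the question above and are illustrative:

```shell
# On the Replica at Site B: force the Primary's FQDN to resolve to the
# server-subnet address by adding a UCR-managed static /etc/hosts entry.
ucr set hosts/static/x.x.8.5="my-primary.example.intranet my-primary"

# Verify which address the resolver now picks (should show x.x.8.5):
getent hosts my-primary.example.intranet
```

Because `/etc/hosts` is consulted before DNS in the default `nsswitch.conf` ordering, this only affects the Replica doing the join, while DNS itself stays untouched for clients at Site A.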

Which way round do you have the NICs?

It seems like Univention has a preference for the first NIC.
Also, are these two separate cards, or is one a sub-interface of the same card that is then resolved at a switch?

As in:
NIC 1
NIC 2
or
NIC1
NIC1_1

I’ve had no end of trouble with “NIC1_1”-style interface naming…
