Let me start with a short explanation. I have two UCS 5.0-6 servers running as domain controllers, both hosted in Proxmox. They were installed with a single Ethernet trunk port, which was split into VLAN interfaces. So under UCS System → Network I had ens19.10, ens19.11, ens19.12, etc., each with its own IP addressing. The domain was working as intended.
When upgrading Proxmox to version 8, I decided to switch to the now prominently featured Software Defined Network (SDN), which makes management much easier in a multi-VLAN environment.
I did some testing and everything seemed right: servers talked to each other and there were no issues traversing VLANs, so I made the move. The network configuration now looks as if the VM had separate Ethernet ports attached directly, with the VLAN tagging handled by Proxmox.
| VLAN | old port | new port |
|---|---|---|
| 10 | ens19.10 | ens18 |
| 11 | ens19.11 | ens20 |
| 12 | ens19.12 | ens21 |
The primary network interface changed from ens19.10 to ens18.
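For reference, on the UCS side the change essentially boils down to re-pointing the interface UCR variables, roughly like this (the addresses and netmasks below are placeholders, not my real values):

```bash
# Rough sketch of the new layout in UCR (placeholder addressing):
ucr set interfaces/primary=ens18 \
        interfaces/ens18/address=192.168.0.1 \
        interfaces/ens18/netmask=255.255.255.0
# ...plus the matching interfaces/ens20/* and interfaces/ens21/* variables
# for VLAN 11 and VLAN 12.
```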
Now, the problem I'm facing is that domain computers, which originally lived in VLAN 11, can't connect to anything except ports 80/443.
In UCR I changed security/packetfilter/defaultpolicy and security/packetfilter/disabled, but still no luck.
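Concretely, what I did was roughly along these lines (run as root on the DC; the exact values may not be verbatim):

```bash
# Open up the UCS packet filter and reload it:
ucr set security/packetfilter/defaultpolicy=ACCEPT
ucr set security/packetfilter/disabled=true
service univention-firewall restart

# Sanity check afterwards:
ucr get security/packetfilter/disabled
iptables -L INPUT -n | head
```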
If I scan the server with nmap from one of the workstations, it shows plenty of open ports (SSH, LDAP, Kerberos, etc.), but I still can't actually connect to any of them.
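If it helps to narrow this down, I can trace the traffic on the DC while a workstation tries to connect, something like this (interface name and port are just examples from my new layout):

```bash
# Watch SMB traffic arriving on the VLAN 11-facing interface (example values):
tcpdump -ni ens20 'tcp port 445'

# Plain TCP connect test from a Linux machine in VLAN 11, if one is handy:
nc -vz 192.168.0.1 445
```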
If I try to reach the SMB shares, I can do so via the IP address but not via the DNS name of the server. For example, in Windows File Explorer:
`\\192.168.0.1\` will work,
`\\dc01\` won't.
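Since access by IP works but access by name doesn't, these are the name-resolution and Kerberos checks I can run from the DC or a Linux client (the domain name and account below are placeholders):

```bash
# Forward and reverse lookups (domain name is a placeholder):
host dc01.mydomain.intra
host 192.168.0.1

# Share listing by name (Kerberos) vs. by IP, from a Linux client:
kinit Administrator
smbclient -k -L //dc01.mydomain.intra
smbclient -L //192.168.0.1 -U Administrator
```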
What's even more confusing is that any other server running in VLAN 10 can access all of the services. I've run out of ideas on where to look: the switches look right, the SDN works, the ports are visible and open, so there must be something in UCS that stops me from establishing connections from another VLAN.
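For completeness, this is how I'm inspecting the packet filter for anything interface- or VLAN-specific that might be left over from the old setup:

```bash
# Dump all packet-filter related UCR variables:
ucr search --brief security/packetfilter

# Live rules, with per-rule packet counters and interface matches:
iptables -L -n -v
```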
Any ideas?