Cannot do an AD takeover

During an AD takeover & replacement, everything appears to work: Univention claims it migrated >200 objects correctly.

The annoying thing is that it has worked for a load of users.

Then there are 4 it did not work for.

Then there are a load MORE users it DID work for.

Then some others, bringing the total objects to about 94…

Now, either the migration process works or it does not: why should groups of records fail to migrate while most of the others are OK?

If the records contain bad fields, this should be flagged during the migration process.

Using an LDAP editor to check these records, there IS a “uidNumber” field and it is populated.

Most users are migrated, but in the S4 connector it states this:

01.02.2023 14:21:41.493 LDAP        (PROCESS): sync AD > UCS: Resync rejected dn: 'CN=firstname.secondname,OU=01-Office,DC=org,DC=blown-up,DC=com'
01.02.2023 14:21:41.496 LDAP        (PROCESS): sync AD > UCS: [          user] [       add] 'uid=firstname.secondname,OU=01-Office,dc=org,dc=blown-up,dc=com'
01.02.2023 14:21:41.528 LDAP        (ERROR  ): Unknown Exception during sync_to_ucs
01.02.2023 14:21:41.529 LDAP        (ERROR  ): Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/univention/s4connector/__init__.py", line 1483, in sync_to_ucs
    result = self.add_in_ucs(property_type, object, module, position)
  File "/usr/lib/python3/dist-packages/univention/s4connector/__init__.py", line 1203, in add_in_ucs
    res = ucs_object.create(serverctrls=serverctrls, response=response)
  File "/usr/lib/python3/dist-packages/univention/admin/handlers/__init__.py", line 557, in create
    dn = self._create(response=response, serverctrls=serverctrls)
  File "/usr/lib/python3/dist-packages/univention/admin/handlers/__init__.py", line 1259, in _create
    self._ldap_pre_create()
  File "/usr/lib/python3/dist-packages/univention/admin/handlers/users/user.py", line 1431, in _ldap_pre_create
    univention.admin.allocators.acquireUnique(self.lo, self.position, 'uidNumber', self['uidNumber'], 'uidNumber', scope='base')
  File "/usr/lib/python3/dist-packages/univention/admin/allocators.py", line 214, in acquireUnique
    univention.admin.locking.lock(lo, position, type, value.encode('utf-8'), scope=scope)
  File "/usr/lib/python3/dist-packages/univention/admin/locking.py", line 124, in lock
    raise univention.admin.uexceptions.noLock(_('The attribute %r could not get locked.') % (type,))
univention.admin.uexceptions.noLock: Could not acquire lock: The attribute 'uidNumber' could not get locked.
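
For context, the last frames of the traceback are Univention's uniqueness allocator: before creating the user, `acquireUnique` records a lock entry for the `uidNumber` value, and `noLock` means such an entry already exists. A toy illustration of that behaviour (a sketch, not Univention's code; the value "2005" is a made-up uidNumber):

```python
# Toy illustration (NOT Univention's code) of what acquireUnique does:
# it records a lock entry for the value before the user is created, and
# raises if a lock entry for that value already exists.

class NoLock(Exception):
    """Stand-in for univention.admin.uexceptions.noLock."""

class LockStore:
    def __init__(self):
        self._locks = set()

    def lock(self, attribute, value):
        """Fail if a lock on (attribute, value) is already held."""
        key = (attribute, value)
        if key in self._locks:
            raise NoLock("The attribute %r could not get locked." % attribute)
        self._locks.add(key)

    def unlock(self, attribute, value):
        self._locks.discard((attribute, value))

store = LockStore()
store.lock("uidNumber", "2005")       # a previous, failed sync left this behind
try:
    store.lock("uidNumber", "2005")   # the retry then hits the stale lock
except NoLock as exc:
    print(exc)                        # same error text as in the log above
```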

OK, tracked this down…
It seems to be a race condition in the takeover code, specifically related to this:

password_sync_s4_to_ucs: Mismatch between UCS pwhistoryPolicy (3) and S4 pwhistoryPolicy (5). Using the larger one.

After checking that these records were non-existent in the UCS database,
I modified the “locking” code in:
/usr/lib/python3/dist-packages/univention/admin/allocators.py

Every one of the bad records failed the password match AD -> UCS. Once the S4 connector blew through all the password errors and the code mod ignored the lock, every one of these records was correctly merged into the database, and:

univention-s4connector-list-rejected
UCS rejected
S4 rejected

There may be no rejected DNs if the connector is in progress, to be
sure stop the connector before running this script.
last synced USN: 6302

and it all synced up with no errors…

So I think that during the takeover there is a discrepancy in the password setup, which causes the sync to throw errors. The sync then exits without clearing the record lock, so when it tries to add the users it cannot, because the field is locked…
Then on the next try it happens again… (password length errors etc.)

Now, since this is only a test AD, no users are logged in and nothing is changing in these records, so nothing should be locking them…

So basically, for all 112 users that could not be added the first time:

  1. modify the code base
  2. flush the Python cache
  3. restart
  4. after that, the lock for the 112 records is ignored and they all flush as below:

06.02.2023 16:34:11.589 LDAP (WARNING): password_sync_s4_to_ucs: Mismatch between UCS pwhistoryPolicy (3) and S4 pwhistoryPolicy (5). Using the larger one.
06.02.2023 16:34:11.648 LDAP (PROCESS): sync AD > UCS: Resync rejected dn: 'CN=first.last,OU=Resigned-Staff,DC=org,DC=blown-up,DC=com'
06.02.2023 16:34:11.653 LDAP (PROCESS): sync AD > UCS: [ user] [ add] 'uid=first.last,OU=Resigned-Staff,dc=org,dc=blown-up,dc=com'
06.02.2023 16:34:12.589 LDAP (WARNING): password_sync_s4_to_ucs: Mismatch between UCS pwhistoryPolicy (3) and S4 pwhistoryPolicy (5). Using the larger one.

06.02.2023 16:34:39.239 LDAP (PROCESS): sync UCS > AD: [ group] [ modify] 'cn=resigned-staff,cn=groups,DC=org,DC=blown-up,DC=com'
06.02.2023 16:34:39.285 LDAP (PROCESS): sync UCS > AD: [ user] [ add] 'cn=first.last,ou=resigned-staff,DC=org,DC=blown-up,DC=com'

06.02.2023 16:34:51.862 LDAP (PROCESS): sync AD > UCS: [ user] [ modify] 'uid=first.last,ou=resigned-staff,dc=org,dc=blown-up,dc=com'

  5. modify the code base back; now, after manipulating any of these records, it all syncs up correctly both ways.