Test failure: test_match_large_valueset #6192

Closed
vashirov opened this issue May 31, 2024 · 4 comments · Fixed by #6206

Labels
needs triage: The issue will be triaged during scrum

Comments

@vashirov
Member

Issue Description

=================================== FAILURES ===================================
__________________________ test_match_large_valueset ___________________________

topology_st = <lib389.topologies.TopologyMain object at 0x7f5860c94b30>

    def test_match_large_valueset(topology_st):
        """Test that when returning a big number of entries
        and that we need to match the filter from a large valueset
        we get benefit to use the sorted valueset
    
        :id: 7db5aa88-50e0-4c31-85dd-1d2072cb674c
    
        :setup: Standalone instance
    
        :steps:
             1. Create a users and groups backends and tune them
             2. Generate a test ldif (2k users and 1K groups with all users)
             3. Import test ldif file using Offline import (ldif2db).
             4. Prim the 'groups' entrycache with a "fast" search
             5. Search the 'groups' with a difficult matching value
             6. check that etime from step 5 is less than a second
    
        :expectedresults:
             1. Create a users and groups backends should PASS
             2. Generate LDIF should PASS.
             3. Offline import should PASS.
             4. Priming should PASS.
             5. Performance search should PASS.
             6. Etime of performance search should PASS.
        """
    
        log.info('Running test_match_large_valueset...')
        #
/usr/lib/python3.12/site-packages/lib389/__init__.py:177: in inner
    return f(*args, **kwargs)
/usr/lib64/python3.12/site-packages/ldap/ldapobject.py:400: in modify_ext_s
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)
/usr/lib/python3.12/site-packages/lib389/__init__.py:177: in inner
    return f(*args, **kwargs)
/usr/lib64/python3.12/site-packages/ldap/ldapobject.py:543: in result3
    resp_type, resp_data, resp_msgid, decoded_resp_ctrls, retoid, retval = self.result4(
/usr/lib/python3.12/site-packages/lib389/__init__.py:177: in inner
    return f(*args, **kwargs)
/usr/lib64/python3.12/site-packages/ldap/ldapobject.py:553: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
/usr/lib/python3.12/site-packages/lib389/__init__.py:177: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <lib389.DirSrv object at 0x7f5860c97560>
func = <built-in method result4 of LDAP object at 0x7f5860f1fc90>
args = (34, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None

    def _ldap_call(self,func,*args,**kwargs):
      """
      Wrapper method mainly for serializing calls into OpenLDAP libs
      and trace logs
      """
      self._ldap_object_lock.acquire()
      if __debug__:
        if self._trace_level>=1:
          self._trace_file.write('*** {} {} - {}\n{}\n'.format(
            repr(self),
            self._uri,
            '.'.join((self.__class__.__name__,func.__name__)),
            pprint.pformat((args,kwargs))
          ))
          if self._trace_level>=9:
            traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
      diagnostic_message_success = None
      try:
        try:
>         result = func(*args,**kwargs)
E         ldap.UNWILLING_TO_PERFORM: {'msgtype': 103, 'msgid': 34, 'result': 53, 'desc': 'Server is unwilling to perform', 'ctrls': [], 'info': 'Error: "nsslapd-cachememsize" can not be updated while "nsslapd-cache-autosize" is set in "cn=config,cn=ldbm database,cn=plugins,cn=config".', 'ldap_request': "modify_ext_s(('cn=groups,cn=ldbm database,cn=plugins,cn=config', [(2, 'nsslapd-cachememsize', [b'200000000'])]),{'serverctrls': None, 'clientctrls': None, 'escapehatch': 'i am sure'}) on instance standalone1"}
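
For reference, the failure boils down to a single modify against the groups backend entry while cache autosizing is still active. A minimal sketch of that modify outside the test, using python-ldap directly (the URI and bind credentials are placeholders, not what the test uses):

import ldap

BACKEND_DN = "cn=groups,cn=ldbm database,cn=plugins,cn=config"

# Placeholder connection details; the test talks to a lib389-managed instance.
conn = ldap.initialize("ldap://localhost:389")
conn.simple_bind_s("cn=Directory Manager", "password")

try:
    # While nsslapd-cache-autosize is non-zero, the server rejects this
    # modify with err=53 (UNWILLING_TO_PERFORM), which is what the test hits.
    conn.modify_s(BACKEND_DN,
                  [(ldap.MOD_REPLACE, "nsslapd-cachememsize", b"200000000")])
except ldap.UNWILLING_TO_PERFORM as e:
    print("rejected:", e.args[0].get("info"))
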
@vashirov
Member Author

Test was introduced in #6172

@vashirov
Member Author

vashirov commented Jun 6, 2024

Test fails with BDB only.

ldap_modify: Server is unwilling to perform (53)
        additional info: Error: "nsslapd-cachememsize" can not be updated while "nsslapd-cache-autosize" is set in "cn=config,cn=ldbm database,cn=plugins,cn=config".
Error at: cn=groups,cn=ldbm database,cn=plugins,cn=config

But there is no nsslapd-cache-autosize under cn=config,cn=ldbm database,cn=plugins,cn=config.
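
One way to see where the attribute actually lives is to search the whole ldbm config subtree for it; a quick sketch with python-ldap (connection details are placeholders):

import ldap

LDBM_CONFIG = "cn=config,cn=ldbm database,cn=plugins,cn=config"

conn = ldap.initialize("ldap://localhost:389")
conn.simple_bind_s("cn=Directory Manager", "password")

# Print every entry under the ldbm config that carries nsslapd-cache-autosize.
for dn, attrs in conn.search_s(LDBM_CONFIG, ldap.SCOPE_SUBTREE,
                               "(objectClass=*)", ["nsslapd-cache-autosize"]):
    if attrs:
        print(dn, attrs)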

@progier389
Contributor

progier389 commented Jun 6, 2024

> But there is no nsslapd-cache-autosize under cn=config,cn=ldbm database,cn=plugins,cn=config.

I think that is the problem:
nsslapd-cache-autosize defaults to 25, so I guess it should explicitly be set to 0 before changing the cache size.

@vashirov
Member Author

vashirov commented Jun 6, 2024

> But there is no nsslapd-cache-autosize under cn=config,cn=ldbm database,cn=plugins,cn=config.

Sorry, I forgot to mention that the message is misleading. The actual value is under cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config. But yes, changing it to 0 before changing the cache size helps with this particular failure.
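
Roughly, the sequence that avoids err=53 on a BDB instance looks like this (python-ldap sketch; the connection details are placeholders and the cache size value is the one from the test):

import ldap

BDB_CONFIG = "cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config"
GROUPS_BACKEND = "cn=groups,cn=ldbm database,cn=plugins,cn=config"

conn = ldap.initialize("ldap://localhost:389")
conn.simple_bind_s("cn=Directory Manager", "password")

# 1. Explicitly turn off cache autosizing on the entry where the attribute
#    actually lives (its default is 25).
conn.modify_s(BDB_CONFIG,
              [(ldap.MOD_REPLACE, "nsslapd-cache-autosize", b"0")])

# 2. The per-backend cache size is now accepted.
conn.modify_s(GROUPS_BACKEND,
              [(ldap.MOD_REPLACE, "nsslapd-cachememsize", b"200000000")])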

But later the test still fails, both with bdb and mdb backends:

        # Step 6. Gather the etime from the access log
        inst.stop()
        access_log = DirsrvAccessLog(inst)
        search_result = access_log.match(".*RESULT err=0 tag=101 nentries=%s.*" % groups_number)
        log.info("Found patterns are %s", search_result[0])
        log.info("Found patterns are %s", search_result[1])
        etime = float(search_result[1].split('etime=')[1])
        log.info("Duration of the search from access log was %f", etime)
        assert len(entries) == groups_number
>       assert (etime < 1)
E       assert 1.794499552 < 1

Perhaps we should adjust the expected etime.
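
For illustration, the check could pull etime out of the RESULT line with an explicit capture and compare it against a relaxed bound; the sample line, regex, and threshold below are only an example, not the test's actual code:

import re

ETIME_RE = re.compile(r"etime=(\d+\.\d+)")
MAX_ETIME = 2.0  # looser bound so slower VMs still pass

# Illustrative RESULT line; the real one comes from the access log match above.
line = "RESULT err=0 tag=101 nentries=1000 wtime=0.000120 optime=1.794312000 etime=1.794499552"
match = ETIME_RE.search(line)
assert match is not None
etime = float(match.group(1))
assert etime < MAX_ETIME, f"search took {etime}s, expected under {MAX_ETIME}s"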

vashirov added a commit to vashirov/389-ds-base that referenced this issue Jun 6, 2024
Description:
When BDB backend is used, nsslapd-cache-autosize needs to be set to 0
first in order to change nsslapd-cachememsize.
Also increase the expected etime slightly, as it fails on slower VMs
both with BDB and MDB backends.

Fixes: 389ds#6192
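
In test terms, the setup change the commit describes could look roughly like this with lib389's generic DSLdapObject wrapper (a sketch only; the actual patch in #6206 may be structured differently, the helper name tune_groups_cache is hypothetical, and inst stands for topology_st.standalone):

from lib389._mapped_object import DSLdapObject

BDB_CONFIG_DN = "cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config"
GROUPS_BACKEND_DN = "cn=groups,cn=ldbm database,cn=plugins,cn=config"


def tune_groups_cache(inst):
    # Only a BDB-backed instance has the bdb config entry; if it is there,
    # autosizing has to be switched off before nsslapd-cachememsize is accepted.
    bdb_cfg = DSLdapObject(inst, dn=BDB_CONFIG_DN)
    if bdb_cfg.exists():
        bdb_cfg.replace('nsslapd-cache-autosize', '0')
    groups_be = DSLdapObject(inst, dn=GROUPS_BACKEND_DN)
    groups_be.replace('nsslapd-cachememsize', '200000000')
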
vashirov added a commit to vashirov/389-ds-base that referenced this issue Jun 8, 2024
vashirov added a commit that referenced this issue Jun 8, 2024

Reviewed by: @droideck, @tbordaz (Thanks!)