autofs: return a connection failure until maps have been fetched #3413

Closed
sssd-bot opened this issue May 2, 2020 · 2 comments
Labels
Bugzilla Closed: Fixed Issue was closed as fixed.

Comments


sssd-bot commented May 2, 2020

Cloned from Pagure issue: https://pagure.io/SSSD/sssd/issue/2371


Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 7): Bug 1113639

Description of problem:
Please see the discussion in
https://bugzilla.redhat.com/show_bug.cgi?id=1101782

Ian came up with a patch.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. remove sssd caches (simulating a fresh system)
2. reboot
3. attempt to mount an autofs map without restarting either autofs or sssd

Actual results:
no maps were cached

Expected results:
autofs would retry, sssd would return the maps, shares would be mounted

Additional info:
The SSSD autofs client should return a connection error when no master map can
be fetched from LDAP due to back-end problems (as opposed to the map simply
being absent from LDAP).

Comments


Comment from dpal at 2014-07-03 16:04:34

Fields changed

blockedby: =>
blocking: =>
changelog: =>
coverity: =>
design: =>
design_review: => 0
feature_milestone: =>
fedora_test_page: =>
milestone: NEEDS_TRIAGE => SSSD 1.13
review: True => 0
selected: =>
testsupdated: => 0


Comment from jhrozek at 2015-02-12 20:52:50

Not needed for the next release

mark: => 0
milestone: SSSD 1.13 => SSSD 1.14 beta


Comment from jhrozek at 2016-02-16 13:54:14

Fields changed

milestone: SSSD 1.14 beta => SSSD 1.14.0
sensitive: => 0


Comment from mymzbe at 2016-05-26 07:52:18

Fields changed

cc: => muhammad.zali@t-systems.com


Comment from jhrozek at 2016-06-27 10:29:16

Unfortunately the autofs responder/provider work is still not started and we need to release the 1.14 version soon. Therefore, I'm bumping this ticket to the next version.

milestone: SSSD 1.14.0 => SSSD 1.16 beta


Comment from jhrozek at 2017-02-06 15:42:59

Fields changed

milestone: SSSD Future releases (no date set yet) => SSSD 1.15.2


Comment from jhrozek at 2017-02-24 14:34:23

Metadata Update from @jhrozek:

  • Issue set to the milestone: SSSD 1.15.2

Comment from jhrozek at 2017-03-03 14:08:01

Metadata Update from @jhrozek:

  • Custom field design_review reset
  • Custom field mark reset
  • Custom field patch reset
  • Custom field review reset
  • Custom field sensitive reset
  • Custom field testsupdated reset
  • Issue close_status updated to: None
  • Issue set to the milestone: SSSD 1.15.3 (was: SSSD 1.15.2)

Comment from jhrozek at 2017-03-15 11:46:55

Metadata Update from @jhrozek:

  • Custom field design_review reset
  • Custom field mark reset
  • Custom field patch reset
  • Custom field review reset
  • Custom field sensitive reset
  • Custom field testsupdated reset
  • Issue set to the milestone: SSSD 1.15.4 (was: SSSD 1.15.3)

Comment from jhrozek at 2017-08-18 16:58:56

Metadata Update from @jhrozek:

  • Custom field design_review reset (from false)
  • Custom field mark reset (from false)
  • Custom field patch reset (from false)
  • Custom field review reset (from false)
  • Custom field sensitive reset (from false)
  • Custom field testsupdated reset (from false)
  • Issue tagged with: cleanup-one-sixteen

Comment from jhrozek at 2017-08-18 17:40:59

Metadata Update from @jhrozek:

  • Custom field design_review reset (from false)
  • Custom field mark reset (from false)
  • Custom field patch reset (from false)
  • Custom field review reset (from false)
  • Custom field sensitive reset (from false)
  • Custom field testsupdated reset (from false)
  • Issue untagged with: cleanup-one-sixteen
  • Issue tagged with: cleanup-future

Comment from jhrozek at 2017-08-23 17:11:33

Metadata Update from @jhrozek:

  • Custom field design_review reset (from false)
  • Custom field mark reset (from false)
  • Custom field patch reset (from false)
  • Custom field review reset (from false)
  • Custom field sensitive reset (from false)
  • Custom field testsupdated reset (from false)
  • Issue untagged with: cleanup-future
  • Issue set to the milestone: SSSD Future releases (no date set yet) (was: SSSD 1.15.4)

Comment from pbrezina at 2019-12-04 11:49:05

Metadata Update from @pbrezina:

  • Issue assigned to pbrezina

Comment from thalman at 2020-03-13 12:27:50

Metadata Update from @thalman:

  • Custom field design_review reset (from false)
  • Custom field mark reset (from false)
  • Custom field patch reset (from false)
  • Custom field review reset (from false)
  • Custom field sensitive reset (from false)
  • Custom field testsupdated reset (from false)
  • Issue tagged with: bugzilla
pbrezina added a commit to pbrezina/sssd that referenced this issue Oct 1, 2020
So we do not publish internal error code.

Resolves:
SSSD#3413
pbrezina added a commit to pbrezina/sssd that referenced this issue Oct 1, 2020
If the backend is offline when autofs starts and reads auto.master map
we don't want to wait 60 seconds before the offline flag is reset. We
need to allow autofs to retry the call much sooner.

Resolves:
SSSD#3413
pbrezina added a commit that referenced this issue Dec 4, 2020
So we do not publish internal error code.

Resolves:
#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
pbrezina added a commit that referenced this issue Dec 4, 2020
If the backend is offline when autofs starts and reads auto.master map
we don't want to wait 60 seconds before the offline flag is reset. We
need to allow autofs to retry the call much sooner.

Resolves:
#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>

pbrezina commented Dec 4, 2020

Pushed PR: #5343

  • master
    • 075519b - configure: check for stdatomic.h
    • 8a22d4a - autofs: correlate errors for different protocol versions
    • 34c519a - autofs: disable fast reply
    • 9098108 - autofs: translate ERR_OFFLINE to EHOSTDOWN
    • e50258d - autofs: return ERR_OFFLINE if we fail to get information from backend and cache is empty
    • 3f0ba4c - cache_req: allow cache_req to return ERR_OFFLINE if all dp request failed

pbrezina added the "Closed: Fixed" label Dec 4, 2020
@pbrezina

Pushed PR: #5462

  • master
    • 2499bd1 - cache_req: ignore autofs not configured error

elkoniu pushed a commit to elkoniu/sssd that referenced this issue Feb 28, 2021
… and cache is empty

Resolves:
SSSD#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
elkoniu pushed a commit to elkoniu/sssd that referenced this issue Feb 28, 2021
So we do not publish internal error code.

Resolves:
SSSD#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
elkoniu pushed a commit to elkoniu/sssd that referenced this issue Feb 28, 2021
If the backend is offline when autofs starts and reads auto.master map
we don't want to wait 60 seconds before the offline flag is reset. We
need to allow autofs to retry the call much sooner.

Resolves:
SSSD#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
akuster pushed a commit to akuster/sssd that referenced this issue May 18, 2021
… and cache is empty

Resolves:
SSSD#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
akuster pushed a commit to akuster/sssd that referenced this issue May 18, 2021
So we do not publish internal error code.

Resolves:
SSSD#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
akuster pushed a commit to akuster/sssd that referenced this issue May 18, 2021
If the backend is offline when autofs starts and reads auto.master map
we don't want to wait 60 seconds before the offline flag is reset. We
need to allow autofs to retry the call much sooner.

Resolves:
SSSD#3413

Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>