Concurrent token refresh causes session to drop authentication. #334
Comments
I'm experiencing this problem as well, and agree with your explanation. I've implemented a rough workaround to allow only one URL with the potential to redirect to be in flight at any given time. This is implemented with a simple flag in a shared dictionary.
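That single-in-flight flag could be sketched with an OpenResty shared dictionary. This is only an illustration of the idea, not the commenter's actual code; the dict name (`auth_locks`), the key, the 30-second safety expiry, and the pre-existing `opts` table are all assumptions:

```lua
-- Sketch of an "only one redirect-capable request in flight" flag.
-- Assumes nginx.conf declares:  lua_shared_dict auth_locks 1m;
-- and that `opts` holds your usual lua-resty-openidc options.
local locks = ngx.shared.auth_locks

-- Shared-dict `add` is atomic: it fails if the key already exists, so
-- exactly one request wins. The expiry is a safety net in case the
-- winning request dies before clearing the flag.
local ok = locks:add("redirect_in_flight", true, 30)
if ok then
  local res, err = require("resty.openidc").authenticate(opts)
  locks:delete("redirect_in_flight")
  -- handle res/err as usual
else
  -- another request is already (re)authenticating; fail fast or retry
  ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end
```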
I actually don't think it is feasible to use locking to solve this. If multiple redirects are being queued up, you can't guarantee that the first one will actually be returned, so you have to send them all off and handle the fact that they might come back out of order. It's going to take some thought, I think, to figure out how best to do that.
Actually, the refresh-token reauths can be locked; it's the web redirects that can't.
I had to extend my workaround a bit further. I now have two different locks: one for the web redirects and one for the silent redirects. The web redirects need the timed expiration for safety, but the silent reauths can just control the […]
OK, so I've dialed in a locking method that is reasonably robust, that seeks to make sure there is never more than one […]

It does come at a cost: if multiple requests come in during the authenticate cycle, some will skip the authentication cycle. But in my application, that's not an issue.

I did find, however, that when these controls are working successfully, […] It would be better, I think, if we did not regenerate, and instead handled the concurrent token refreshes in a different manner. Here's my thought. At least one of the affected lines is this:

https://github.com/zmartzone/lua-resty-openidc/blob/master/lib/resty/openidc.lua#L1067

This makes sure that the state recorded in the redirect URI matches the state in the cookie. I understand the CSRF concern makes this matching important. But what I would propose is that every time we generate a new session state, we push the old state onto a list, and allow a match against those as well. We can limit the size of this list to, say, 10, if there is a concern about eventually running out of memory. I think this also means removing states from this list here:

https://github.com/zmartzone/lua-resty-openidc/blob/master/lib/resty/openidc.lua#L1122
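The proposed "remember previous states" idea might look like this pure-Lua sketch. The field name `previous_states` and both helper functions are hypothetical, not existing `lua-resty-openidc` code:

```lua
-- Hypothetical helpers: match a redirect's state against the current
-- session state or one of the last N previous states.
local MAX_OLD_STATES = 10  -- the size cap suggested above

local function remember_state(session, new_state)
  local old = session.data.previous_states or {}
  if session.data.state then
    table.insert(old, 1, session.data.state)  -- newest first
  end
  while #old > MAX_OLD_STATES do
    table.remove(old)  -- drop the oldest entry
  end
  session.data.previous_states = old
  session.data.state = new_state
end

local function state_matches(session, state)
  if session.data.state == state then
    return true
  end
  for _, s in ipairs(session.data.previous_states or {}) do
    if s == state then
      return true
    end
  end
  return false
end
```

A matching old state would then also be removed from the list at the second location linked above, so that each state is accepted at most once.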
Are there any updates on this? My setup seems to be similar: I am using redis and unauth_action "deny" for the tested location. I experience problems as soon as two or more parallel requests occur while the access token needs to be refreshed. How do other people deal with that?

A workaround could be to strip the session cookie from unauthorized requests when using unauth_action "deny". But it feels wrong. Is there a better solution?

(edit: this workaround seems to work as intended. "Unauthorized requests" have to be retried by the app, which is OK in my case)
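One possible reading of that workaround, sketched below. The cookie name `session` is the `lua-resty-session` default, and both the placement and the pre-existing `opts` table are assumptions; the commenter's actual implementation may differ:

```lua
-- Sketch: under unauth_action = "deny", a request that loses the refresh
-- race gets a 401; expiring the (now stale) session cookie in that
-- response forces the client to retry with a clean slate instead of
-- continuing with a broken session.
local openidc = require("resty.openidc")

local res, err = openidc.authenticate(opts, nil, "deny")
if err then
  ngx.header["Set-Cookie"] =
      "session=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT"
  ngx.status = ngx.HTTP_UNAUTHORIZED
  ngx.exit(ngx.HTTP_UNAUTHORIZED)
end
```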
@hoebelix what will happen with parallel requests with this mentioned bug fix? Won't this result in multiple token refreshes?
@thorstenfleischmann yes, this results in multiple token refreshes. But at least for my use case, with server-side sessions stored in redis, this does not seem to be a problem. Example: say I have a session with id 'A' and there are two token refreshes in parallel. After that I have two renewed copies of this session, with ids 'B' and 'C'. Only one of them is used by the browser for subsequent requests. The other, "unused" session gets garbage-collected by redis after some time.
@hoebelix and what is your timing strategy (TTL) for redis to remove your sessions? This is my problem: I have the same implementation, but I don't want to leave a lot of "garbage" in redis for so long (days).
Any news about it? In my single-page application I have three different API calls performed concurrently, so I receive one correct response and two 401 responses in my web application even though my token is correctly refreshed. Thanks in advance for any help!
See also #190, which I think ultimately may have the same root cause.

`lua-resty-openidc` seems to assume that the locking mechanism in `lua-resty-session` keeps information stored in the session consistent when processing concurrent requests. It doesn't. I don't know what the design goals were for the locking mechanism implemented in `lua-resty-session`, but my conclusion after reviewing it is that either it needs to be fixed in `lua-resty-session`, which I believe is not possible without introducing breaking changes, or `lua-resty-openidc` should implement its own locking mechanism instead.

The effect of this is that concurrent updates to the session state may leave the session in bad shape. As the example below shows, it's easy to reproduce deterministically.
Here's what's wrong with the locking mechanism. The general procedure is:

1. `session:start()`
2. update `session.data` (optional)
3. `session:save()` or `session:close()`

According to the documentation, `session:start()` locks the session, which is true; however, what actually happens inside `session:start()` is this:

For argument's sake, let's assume we're using the `redis` storage engine, in which case operations 1, 3, 4 and 5 will make calls to `redis`. These calls are asynchronous, which means the handler yields and other requests can run on the same worker in the meantime. With two concurrent requests the sequence of events will be:

The problem when request B is allowed to proceed is that it will act on data which was read before it was updated by request A. Before discussing what this should look like, let's complete the sequence:
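The lost update can be reproduced outside OpenResty with a small pure-Lua simulation (my illustration, not `lua-resty-session` code): two coroutines stand in for two yielding requests that both read the session before either one saves.

```lua
-- Simulated session store and two "requests" that read, refresh, save.
local store = { token = "expired", refresh_count = 0 }

local function handle(name)
  -- read the session (an async redis call would yield here)
  local snapshot = { token = store.token,
                     refresh_count = store.refresh_count }
  coroutine.yield()
  -- the snapshot's token looks expired, so refresh it
  snapshot.token = "refreshed-by-" .. name
  snapshot.refresh_count = snapshot.refresh_count + 1
  coroutine.yield()
  -- save the (stale) snapshot back
  store.token = snapshot.token
  store.refresh_count = snapshot.refresh_count
end

local a = coroutine.create(function() handle("A") end)
local b = coroutine.create(function() handle("B") end)

-- Interleave as described above: A reads, B reads, then each refreshes
-- and saves -- B's save is based on data read before A's update.
coroutine.resume(a); coroutine.resume(b)
coroutine.resume(a); coroutine.resume(b)
coroutine.resume(a); coroutine.resume(b)

-- two refreshes actually happened, but B clobbered A's write:
print(store.refresh_count)  --> 1, not 2
```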
Here's what this should look like:
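One way to get that serialization, sketched with the `lua-resty-lock` library (my choice of mechanism, with an assumed shared-dict name `session_locks`): hold a per-session lock across the whole read-modify-write, so request B does not even read the session until request A has saved it.

```lua
-- Assumes nginx.conf declares:  lua_shared_dict session_locks 1m;
local resty_lock = require "resty.lock"

-- Run `fn` while holding an exclusive per-session lock.
local function with_session_lock(session_id, fn)
  local lock, err = resty_lock:new("session_locks")
  if not lock then
    return nil, "failed to create lock: " .. (err or "unknown")
  end

  local elapsed, lerr = lock:lock("sess:" .. session_id)
  if not elapsed then
    return nil, "failed to acquire lock: " .. (lerr or "unknown")
  end

  -- Critical section: re-read the session here, check whether the token
  -- still needs refreshing (another request may have refreshed it while
  -- we waited), refresh only if needed, then session:save().
  local ok, res = pcall(fn)
  lock:unlock()

  if not ok then
    return nil, res
  end
  return res
end
```

The key difference from the current behavior is that the read happens *inside* the lock, so a request that waited on the lock sees the already-refreshed token and skips its own refresh.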
Please note that the fix for #190 (PR #209) introduces a session regenerate step when refreshing the access token. While that seemingly fixes #190, it is a problem here: both requests A and B will use the same session ID (from the cookie), but when request A is processed, the session will be destroyed, a new session ID will be generated, and the cookie updated. When request B gets to the point where the session is to be updated, it is still working with the old session, which was already destroyed while processing request A.
Environment
Expected behaviour
Concurrent requests should result in a single token refresh, and the session should be in a valid authenticated state when all requests have been processed.
Actual behaviour
Multiple token refresh calls happen, and the session is left in a non-authenticated state when all requests have been processed.
Minimized example
The problem is exposed by the OpenResty configuration below. It uses a local KeyCloak instance set up to match the OpenResty configuration. For the sake of testing, set the access token lifespan as short as possible (1 minute) and allow refresh tokens to be re-used several times.
With session storage set to use the `shm` or `redis` storage adapter, the problem is more severe than with `cookie` storage, as explained below.

To run the sample, do the following:
[…] `lua-resty-openidc`, but contains four `<iframe>` elements with protected content.

What you will see is that the first `<iframe>` shows the protected content as expected, but the others get a `401` response. From this point the login session has been lost until you go back to the login page http://localhost:8880/login.

If `lua-resty-session` is configured to use `cookie` storage there is no problem, unless KeyCloak is configured to allow a refresh token to be used only once. In that case the procedure above will show the same behavior, except that another browser refresh (with `cookie` storage, that is) will bring the page back to fully operational (until the next time the access token expires) without visiting the login page.

Configuration and NGINX server log files
This set-up mimics a reverse-proxying gateway responsible for authentication and authorization on port 8880 which is backed by a resource server on port 8888. The example uses a single OpenResty instance, but the resource server on port 8888 could also be a separate OpenResty instance, or something else.
nginx.conf (secrets garbled):