Added cookie forwarding (T88016) #272
Conversation
@@ -168,7 +168,6 @@ describe('404 handling', function() {
    })
    .finally(function() {
        nock.cleanAll();
-       nock.restore();
    });
Apparently this was too much: we don't need to restore nock after the test, because it breaks the nock tests that follow. Clearing all mocks is enough to make everything run properly whether tests pass or fail.
Permissions are collected along the path and returned. This is needed for PR wikimedia/restbase#272.
Bug: https://phabricator.wikimedia.org/T88016
Please note that the Travis build will fail until swagger-router is updated, so this change depends on PR:
@@ -268,7 +279,7 @@ RESTBase.prototype._request = function (req) {
    childReq.params = match.params;

    // Call the handler with P.try to catch synchronous exceptions.
-   return P.try(handler, [childRESTBase, childReq])
+   var request = P.try(handler, [childRESTBase, childReq])
`request` is confusing as a name, since we usually use `req` / `request` to denote the actual request object, but here it's a Promise. Maybe `reqHandlerPromise` or `reqHandling` would be more appropriate.
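The naming point can be illustrated with plain Promises (a sketch only: the actual code uses bluebird's `P.try`, and `tryCall` / `reqHandlerPromise` are hypothetical names):

```javascript
// Sketch of the rename suggested above, using plain Promises instead of
// bluebird's P.try. tryCall and reqHandlerPromise are hypothetical names.
function tryCall(handler, args) {
    // Resolving first means a synchronous throw inside the handler is
    // converted into a rejected promise instead of escaping the caller.
    return Promise.resolve().then(function() {
        return handler.apply(null, args);
    });
}

// The returned value is a Promise for the handler's result, not a
// request object, so a name like reqHandlerPromise makes that explicit.
var reqHandlerPromise = tryCall(function(restbase, req) {
    return { status: 200, body: req.uri };
}, [{}, { uri: '/v1/page/html/Foo' }]);
```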
} else if (res.body.query.userinfo) {
    return res.body.query.userinfo;
} else {
    throw apiError({info: 'Unable to parse PHP action API response.'});
Hmmm, not sure about this. Throwing a cannot-parse-response error because we don't know the meta in question seems wrong, given that the response might be a legitimate one.
@d00rman This is exactly how this used to work, but previously we "knew" only the 'pages' field, and now we know both 'pages' and 'userinfo'.
I agree that it's not really a change from the old behavior. We can generalize this further as needed.
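The dispatch under discussion can be sketched as follows; the `pages`/`userinfo` branching mirrors the diff, while `apiError`'s shape here is an assumption:

```javascript
// Sketch of the response dispatch discussed above: a MediaWiki action
// API response is "known" if it carries a pages or userinfo field;
// anything else is rejected as unparseable. apiError is a stand-in for
// the real helper.
function apiError(opts) {
    var e = new Error(opts.info);
    e.status = 400;
    return e;
}

function parseActionApiResponse(res) {
    var query = res.body && res.body.query;
    if (query && query.pages) {
        return query.pages;
    } else if (query && query.userinfo) {
        return query.userinfo;
    } else {
        throw apiError({ info: 'Unable to parse PHP action API response.' });
    }
}
```

Generalizing further would mean extending this whitelist of known fields, as the comment above suggests.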
@@ -219,6 +219,13 @@ Router.prototype._handleSwaggerPathSpec = function(node, pathspec,
    });
}

+   var security = pathspec.security;
I think you can actually handle this synchronously, before the loaderPromise is even created. If your concern is about modules adding security restrictions, then I think we can add logic for that above.
@gwicke but we only know the module's security restrictions after we load it, and it may load submodules, etc. Also, this way we preserve the order, so more top-level restrictions come first and are checked first (it doesn't make any difference now, but it could if we had special restrictions that required separate requests).
/cc @Stype
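A minimal sketch of the ordering argument above, assuming each node along the route path may carry a `security` array (the node shape and names are illustrative):

```javascript
// Sketch of collecting security requirements along the route path so
// that restrictions declared closer to the root are checked first.
// The node shape here is an assumption for illustration.
function collectSecurity(pathNodes) {
    return pathNodes.reduce(function(acc, node) {
        // Parents are visited before children, so top-level
        // restrictions end up first in the collected list.
        return acc.concat(node.security || []);
    }, []);
}

var restrictions = collectSecurity([
    { name: 'root', security: [{ mediawiki_auth: ['read'] }] },
    { name: 'v1' },
    { name: 'page', security: [{ mediawiki_auth: ['editprotected'] }] }
]);
```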
@gwicke yes, there's some stuff I wanted to change; I'll most likely work on this tomorrow.
@Pchelolo, okay.
@@ -168,6 +172,10 @@ RESTBase.prototype.defaultWebRequestHandler = function(req) {
    // Enforce the usage of UA
    req.headers = req.headers || {};
    req.headers['User-Agent'] = req.headers['User-Agent'] || this.rb_config.user_agent;
+   if (this._accessRestrictions.length > 0
+           && req.headers && req.headers['x-internal-request']) {
This header can be injected in the request. We'd either have to set it to false unconditionally when receiving a request, or use some other mechanism to distinguish internal from external services.
I'm wondering if a whitelist regexp to identify internal services would be cleaner than the x-internal-request header. For WMF-internal services, this could be something like /^https?:\/\/[^/]+\.wmnet(:[0-9]+)?\//.
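A quick sketch of that whitelist idea, using the regexp from the comment (`isInternalRequest` is a hypothetical helper name):

```javascript
// Sketch of trusting a host-based whitelist instead of the client-
// settable x-internal-request header. The regexp is the one proposed
// in the comment above; the helper name is an assumption.
var INTERNAL_RE = /^https?:\/\/[^/]+\.wmnet(:[0-9]+)?\//;

function isInternalRequest(uri) {
    return INTERNAL_RE.test(uri);
}
```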
 * the forwarded header.
 */
rbUtil.copyForwardedCookies = function(restbase, req) {
    if (restbase._rootReq
Do you think it would be worth extracting the cookie once, when assigning a new rootReq?
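That suggestion could look roughly like this (a sketch; the property and function names are assumptions, not the actual rbUtil API):

```javascript
// Sketch of extracting the Cookie header once, when the root request is
// assigned, instead of re-reading it on every sub-request. All names
// here are hypothetical.
function setRootReq(restbase, req) {
    restbase._rootReq = req;
    restbase._forwardedCookies = (req.headers && req.headers.cookie) || null;
}

function copyForwardedCookies(restbase, req) {
    if (restbase._forwardedCookies) {
        req.headers = req.headers || {};
        // Don't clobber a cookie that the sub-request already set.
        req.headers.cookie = req.headers.cookie || restbase._forwardedCookies;
    }
    return req;
}
```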
Updated according to all comments. Also, while working on this I found a related bug in the router, https://phabricator.wikimedia.org/T109702, so I fixed it together with this one, as it's dependent.
var uri = req.uri.toString();
return this.rb_config.internal_request_whitelist.some(function(regex) {
    if (/^\/.+\/$/.test(regex)) {
        return new RegExp(regex.substring(1, regex.length - 1)).test(uri);
It would be cheaper to build a single regexp once at startup, and then just call that.
Example:
var regexps = ['/foo/', '/bar/'].map(function(s) {
    return s.substring(1, s.length - 1);
});
var whitelist = new RegExp(regexps.join('|'));
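A runnable version of that suggestion, assuming whitelist entries keep the '/pattern/' string form used in the diff (`compileWhitelist` and the sample entries are hypothetical):

```javascript
// Sketch of precompiling the whitelist into a single RegExp at startup,
// instead of constructing one RegExp per entry on every request.
function compileWhitelist(patterns) {
    var parts = patterns.map(function(s) {
        // Strip the surrounding slashes from '/pattern/' entries.
        return s.substring(1, s.length - 1);
    });
    return new RegExp(parts.join('|'));
}

// Hypothetical entries; real config would use wmnet-style patterns.
var whitelist = compileWhitelist(['/^http:\\/\\/foo\\./', '/^https:\\/\\/bar\\./']);
```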
x-subspecs:
  - mediawiki/v1/content
  - mediawiki_v1_graphoid
  - mediawiki/v1/mobileapps
  # - mediawiki/v1/revision-scoring
securityDefinitions:
  mediawiki_auth:
    description: Checks permissions using MW api
Super minor: This description should ideally be useful to third-party developers. Perhaps something like "MediaWiki cookie authentication", so that they get an idea what kind of cookie they are supposed to pass in here?
Added cookie forwarding (T88016)
Cookies set on the original request are forwarded to the Parsoid and MediaWiki endpoints to let the user authenticate and read content.
Two tests were added verifying that cookies are actually forwarded: one for Parsoid and one for the MediaWiki API.
This solves only part of the problem: after this fix a user can read content from a private wiki, but we still do not have a mechanism to forbid reading content already stored in RESTBase.
Added a simple authentication module which checks permissions for content stored in RESTBase. Permissions are specified in the spec and then collected along the path. On request, permissions are checked with the MediaWiki API, and 401 is returned in case the user lacks any rights.
Bug: https://phabricator.wikimedia.org/T88016
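The described check can be sketched as pure logic, with the MediaWiki API call abstracted away (all names and the response shape are illustrative, not the module's real API):

```javascript
// Sketch of the 401 behavior described above: permissions collected
// along the path are compared against the user's rights, and a 401
// response is produced when any right is missing. Names are hypothetical.
function checkPermissions(requiredPermissions, userRights) {
    var missing = requiredPermissions.filter(function(perm) {
        return userRights.indexOf(perm) === -1;
    });
    if (missing.length > 0) {
        return { status: 401, body: { type: 'unauthorized', missing: missing } };
    }
    return { status: 200 };
}
```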