Reverse HTTP proxy (framework) written in Java that can be used:
- as an API gateway
- as a security proxy
- for HTTP-based integration
- as a WebSockets and STOMP router
Download the binary and unpack it.
Start service-proxy.sh or service-proxy.bat.
Have a look at the main configuration file conf/proxies.xml. Changes to this file are deployed instantly.
Run the samples in the examples folder, follow the REST or SOAP tutorials, see the Documentation or the FAQ.
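All the serviceProxy and soapProxy snippets below are fragments: in a running installation they sit inside the <router> element of conf/proxies.xml. As a rough sketch (namespace declarations on spring:beans abridged here; copy the exact header from the shipped conf/proxies.xml), a minimal file looks like this:
<spring:beans ...>
  <router>
    <serviceProxy port="2000">
      <target host="localhost" port="8080" />
    </serviceProxy>
  </router>
</spring:beans>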
Routing requests from localhost:80 to localhost:8080:
<serviceProxy port="80">
<target host="localhost" port="8080" />
</serviceProxy>
Routing only requests with the path /foo:
<serviceProxy port="80">
<path>/foo</path>
<target host="localhost" port="8080" />
</serviceProxy>
SOAP proxies configure themselves by analysing WSDL:
<soapProxy wsdl="http://thomas-bayer.com/axis2/services/BLZService?wsdl">
</soapProxy>
Add features like logging or XML Schema validation against a WSDL document:
<soapProxy wsdl="http://thomas-bayer.com/axis2/services/BLZService?wsdl">
  <validator />
  <log />
</soapProxy>
Limit the number of requests in a given time frame (requestLimitDuration uses ISO-8601 duration notation, e.g. PT30S for 30 seconds):
<serviceProxy port="80">
<rateLimiter requestLimit="3" requestLimitDuration="PT30S"/>
<target host="localhost" port="8080" />
</serviceProxy>
Rewrite URLs:
<serviceProxy port="2000">
<rewriter>
<map from="^/goodlookingpath/(.*)" to="/backendpath/$1" />
</rewriter>
<target host="my.backend.server" port="80" />
</serviceProxy>
Monitor HTTP traffic:
<serviceProxy port="2000">
<log/>
<target host="localhost" port="8080" />
</serviceProxy>
Dynamically manipulate and monitor messages with Groovy and JavaScript (Nashorn):
<serviceProxy port="2000">
<groovy>
exc.request.header.add("X-Groovy", "Hello from Groovy")
CONTINUE
</groovy>
<target host="localhost" port="8080" />
</serviceProxy>
<serviceProxy port="2000">
<javascript>
exc.getRequest().getHeader().add("X-Javascript", "Hello from JavaScript");
CONTINUE;
</javascript>
<target host="localhost" port="8080" />
</serviceProxy>
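A script can also answer a request itself instead of forwarding it. A minimal sketch, assuming the groovy interceptor's default imports (com.predic8.membrane.core.http.* and the interceptor outcomes, as used in the Groovy example in the examples folder):
<serviceProxy port="2000">
  <groovy>
    // build a response directly; returning RETURN (instead of CONTINUE)
    // makes Membrane send it back without needing a target element
    exc.setResponse(Response.ok("Hello from Membrane!").build())
    RETURN
  </groovy>
</serviceProxy>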
Route and intercept WebSocket traffic:
<serviceProxy port="2000">
<webSocket url="http://my.websocket.server:1234">
<wsLog/>
</webSocket>
<target port="8080" host="localhost"/>
</serviceProxy>
(Find an example on membrane-soa.org)
Use the widely adopted OAuth2/OpenID Framework to secure endpoints:
<serviceProxy name="Resource Service" port="2001">
<oauth2Resource>
<membrane src="https://accounts.google.com" clientId="INSERT_CLIENT_ID" clientSecret="INSERT_CLIENT_SECRET" scope="email profile" subject="sub"/>
</oauth2Resource>
<groovy>
def oauth2 = exc.properties.oauth2
exc.request.header.setValue('X-EMAIL',oauth2.userinfo.email)
CONTINUE
</groovy>
<target host="thomas-bayer.com" port="80"/>
</serviceProxy>
(Find an example on membrane-soa.org)
Operate your own OAuth2/OpenID Authorization Server / Identity Provider:
<serviceProxy name="Authorization Server" port="2000">
<oauth2authserver location="logindialog" issuer="http://localhost:2000" consentFile="consentFile.json">
<staticUserDataProvider>
<user username="john" password="password" email="john@predic8.de" />
</staticUserDataProvider>
<staticClientList>
<client clientId="abc" clientSecret="def" callbackUrl="http://localhost:2001/oauth2callback" />
</staticClientList>
<bearerToken/>
<claims value="aud email iss sub username">
<scope id="username" claims="username"/>
<scope id="profile" claims="username email password"/>
</claims>
</oauth2authserver>
</serviceProxy>
(Find an example on membrane-soa.org)
Secure an endpoint with basic authentication:
<serviceProxy port="2000">
<basicAuthentication>
<user name="bob" password="secret" />
</basicAuthentication>
<target host="localhost" port="8080" />
</serviceProxy>
Route to SSL/TLS secured endpoints:
<serviceProxy port="8080">
<target host="www.predic8.de" port="443">
<ssl/>
</target>
</serviceProxy>
Secure endpoints with SSL/TLS:
<serviceProxy port="443">
<ssl>
<keystore location="membrane.jks" password="secret" keyPassword="secret" />
<truststore location="membrane.jks" password="secret" />
</ssl>
<target host="localhost" port="8080" />
</serviceProxy>
Distribute your workload to multiple nodes:
<serviceProxy name="Balancer" port="8080">
<balancer name="balancer">
<clusters>
<cluster name="Default">
<node host="my.backend.service-1" port="4000"/>
<node host="my.backend.service-2" port="4000"/>
<node host="my.backend.service-3" port="4000"/>
</cluster>
</clusters>
</balancer>
</serviceProxy>
See the configuration reference for much more.