Issue description
Environment
- apisix-java-plugin-runner version: apisix-plugin-runner:0.3.1-SNAPSHOT
- APISIX version: apache/apisix:2.15.1-centos
The Dockerfile used is:
FROM apache/apisix:2.15.1-centos
# Specify the image maintainer
MAINTAINER jimyguo
# Copy files from the host into the container. ADD is used here because it automatically extracts archives; use COPY if extraction is not wanted.
ADD jdk-11.0.16.1_linux-x64_bin.tar.gz /jdk
ADD apache-apisix-java-plugin-runner-0.3.1-SNAPSHOT-bin.tar.gz /opt
ADD apisix-java-plugin-runner-0.3.1-SNAPSHOT-src.tgz /usr/local
# Configure the JDK environment
ENV JAVA_HOME /jdk/jdk-11.0.16.1
ENV PATH ${JAVA_HOME}/bin:$PATH
Minimal test code / Steps to reproduce the issue
1. After packaging the jar normally and starting it as described in https://github.com/apache/apisix-java-plugin-runner/blob/main/docs/en/latest/how-it-works.md, the code below reports System.getProperty("user.dir") as /usr/local/apisix, which causes HotReloadProcess.hotReloadFilter() to throw an error:
String userDir = System.getProperty("user.dir");
System.err.println(userDir);
// Workaround: with apisix-java-plugin-runner-0.3.1-SNAPSHOT-src.tgz extracted into /usr/local, the error no longer occurs
// userDir=userDir.replace("apisix","apisix-java-plugin-runner");
logger.warn("The filter userDir {} ", userDir);
userDir = userDir.substring(0, userDir.lastIndexOf("apisix-java-plugin-runner") + 25); // 25 == "apisix-java-plugin-runner".length()
String workDir = userDir + loadPath;
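The underlying failure is that lastIndexOf returns -1 when user.dir is /usr/local/apisix, so substring(0, -1 + 25) asks for index 24 of a 17-character path and throws StringIndexOutOfBoundsException. A minimal defensive sketch (illustrative names, not the project's actual fix):

// Defensive variant of the work-directory derivation above (illustrative,
// not the project's actual fix): fail with a clear message instead of a
// StringIndexOutOfBoundsException when the marker is absent from user.dir.
static String resolveWorkDir(String userDir, String loadPath) {
    final String marker = "apisix-java-plugin-runner";
    int idx = userDir.lastIndexOf(marker);
    if (idx < 0) {
        throw new IllegalStateException(
                "user.dir (" + userDir + ") does not contain '" + marker
                + "'; start the runner from the extracted runner directory");
    }
    return userDir.substring(0, idx + marker.length()) + loadPath;
}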
2. After the runner starts successfully with this change, the plugin is configured on the route as:
"ext-plugin-pre-req": {
"conf": [
{
"name": "TokenCheckFilter2",
"value": "{\"validate_header\":\"token\",\"rejected_code\":\"403\"}"
}
]
},
At startup, the code below does not print the plugin-discovery logs, e.g. logger.debug("get plugins List:{}", pluginFilterList). Does that mean no plugins were detected and used? (A registration sketch follows the code below.)
public void start(String path) throws Exception {
EventLoopGroup group = null;
ServerBootstrap bootstrap = new ServerBootstrap();
logger.debug("choose channel");
System.err.println("choose channel");
if (KQueue.isAvailable()) {
group = new KQueueEventLoopGroup();
logger.debug("Using kqueue for Netty transport.");
bootstrap.group(group).channel(KQueueServerDomainSocketChannel.class);
} else if (Epoll.isAvailable()) {
group = new EpollEventLoopGroup();
logger.debug("Using epoll for Netty transport.");
bootstrap.group(group).channel(EpollServerDomainSocketChannel.class);
} else {
String errMsg = "java runner only supports epoll or kqueue";
logger.debug(errMsg);
throw new RuntimeException(errMsg);
}
logger.debug("choose channel success");
System.err.println("choose channel success");
try {
initServerBootstrap(bootstrap);
ChannelFuture future = bootstrap.bind(new DomainSocketAddress(path)).sync();
Runtime.getRuntime().exec("chmod 777 " + socketFile);
logger.warn("java runner is listening on the socket file: {}", socketFile);
future.channel().closeFuture().sync();
} finally {
group.shutdownGracefully().sync();
}
}
private void initServerBootstrap(ServerBootstrap bootstrap) {
logger.debug("start init bootstrap");
System.err.println("start init bootstrap");
bootstrap.childHandler(new ChannelInitializer<DomainSocketChannel>() {
@Override
protected void initChannel(DomainSocketChannel channel) {
channel.pipeline().addFirst("logger", new LoggingHandler())
.addAfter("logger", "payloadEncoder", new PayloadEncoder())
.addAfter("payloadEncoder", "delayedDecoder", new BinaryProtocolDecoder())
.addLast("payloadDecoder", new PayloadDecoder())
.addAfter("payloadDecoder", "prepareConfHandler", createConfigReqHandler(cache, beanProvider))
.addAfter("prepareConfHandler", "hTTPReqCallHandler", createA6HttpHandler(cache))
.addLast("exceptionCaughtHandler", new ExceptionCaughtHandler());
}
});
System.err.println("start bootstrap success");
}
public PrepareConfHandler createConfigReqHandler(Cache<Long, A6Conf> cache, ObjectProvider<PluginFilter> beanProvider) {
List<PluginFilter> pluginFilterList = beanProvider.orderedStream().collect(Collectors.toList());
Map<String, PluginFilter> filterMap = new HashMap<>();
logger.debug("get plugins List:{}",pluginFilterList);
System.err.println("get plugins List:"+pluginFilterList);
for (PluginFilter filter : pluginFilterList) {
System.err.println(filter.name());
filterMap.put(filter.name(), filter);
}
return new PrepareConfHandler(cache, filterMap);
}
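If those logs never appear, a common cause is that the filter bean is not visible to Spring's component scan, so beanProvider.orderedStream() yields an empty list. Below is a minimal sketch of a discoverable filter; the package names are assumed from the 0.3.x SDK, and the class must live in a package the runner actually scans (see the how-it-works doc):

import org.apache.apisix.plugin.runner.HttpRequest;
import org.apache.apisix.plugin.runner.HttpResponse;
import org.apache.apisix.plugin.runner.PostRequest;
import org.apache.apisix.plugin.runner.PostResponse;
import org.apache.apisix.plugin.runner.filter.PluginFilter;
import org.apache.apisix.plugin.runner.filter.PluginFilterChain;
import org.springframework.stereotype.Component;

// @Component is what makes the filter show up in ObjectProvider<PluginFilter>;
// without it (or with the class outside the scanned package) the filter list
// stays empty and "get plugins List" is never logged.
@Component
public class DiscoverableFilter implements PluginFilter {

    @Override
    public String name() {
        // Must match the "name" field in the ext-plugin-pre-req conf.
        return "DiscoverableFilter";
    }

    @Override
    public void filter(HttpRequest request, HttpResponse response, PluginFilterChain chain) {
        chain.filter(request, response);
    }

    @Override
    public void postFilter(PostRequest request, PostResponse response, PluginFilterChain chain) {
        // no-op
    }
}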
The startup logs are as follows:
2022/12/05 12:58:18 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:18.927 DEBUG 56 --- [ main] o.a.a.p.r.s.ApplicationRunner : start init bootstrap
start init bootstrap, context: ngx.timer
2022/12/05 12:58:18 [warn] 55#55: *218 [lua] init.lua:913: start bootstrap success, context: ngx.timer
2022/12/05 12:58:18 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:18.936 DEBUG 56 --- [ main] i.n.c.DefaultChannelId : -Dio.netty.processId: 56 (auto-detected), context: ngx.timer
2022/12/05 12:58:18 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:18.938 DEBUG 56 --- [ main] i.n.c.DefaultChannelId : -Dio.netty.machineId: 6a:b1:89:ff:fe:19:44:6f (auto-detected), context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.009 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.numHeapArenas: 2, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.009 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.numDirectArenas: 2, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.009 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.pageSize: 8192, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.maxOrder: 9, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.chunkSize: 4194304, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.smallCacheSize: 256, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.normalCacheSize: 64, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.maxCachedBufferCapacity: 32768, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.cacheTrimInterval: 8192, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.cacheTrimIntervalMillis: 0, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.useCacheForAllThreads: false, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.010 DEBUG 56 --- [ main] i.n.b.PooledByteBufAllocator : -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.045 DEBUG 56 --- [ main] i.n.b.ByteBufUtil : -Dio.netty.allocator.type: pooled, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.045 DEBUG 56 --- [ main] i.n.b.ByteBufUtil : -Dio.netty.threadLocalDirectBufferSize: 0, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.046 DEBUG 56 --- [ main] i.n.b.ByteBufUtil : -Dio.netty.maxThreadLocalCharBufferSize: 16384, context: ngx.timer
2022/12/05 12:58:19 [warn] 55#55: *218 [lua] init.lua:913: 2022-12-05 12:58:19.171 WARN 56 --- [ main] o.a.a.p.r.s.ApplicationRunner : java runner is listening on the socket file: /usr/local/apisix/conf/apisix-1.sock, context: ngx.timer
After calling the API, TokenCheckFilter2 should be invoked:
@Override
public String name() {
return "TokenCheckFilter2";
}
@Override
public void filter(HttpRequest request, HttpResponse response, PluginFilterChain chain) {
log.info("TokenCheckFilter start");
// parse `conf` to json
String configStr = request.getConfig(this);
Gson gson = new Gson();
Map<String, Object> conf = new HashMap<>();
conf = gson.fromJson(configStr, conf.getClass());
// get configuration parameters
String token = request.getHeader((String) conf.get("validate_header")); // read the token from the configured header
Map<String, String> headers = request.getHeaders();
Map<String, String> body = new HashMap<>();
body = gson.fromJson(request.getBody(), body.getClass()); // parse the request body
Map<String, String> params = request.getArgs();
String rejectedCode ="0";
if(checkAppIdToken(body)){
log.info("token success");
response.setBody(JsonResult.success().toString());
}else {
if(checkAppIdToken(params)) {
log.info("token success");
response.setBody(JsonResult.success().toString());
}else {
log.info("token failed");
rejectedCode = (String) conf.get("rejected_code");
response.setStatusCode(Integer.parseInt(rejectedCode));
response.setHeader("x-token","no token");
response.setBody(JsonResult.fail(GlobalResultStatus.AUTH_MISSING).toString());
}
}
chain.filter(request, response);
}
public boolean checkAppIdToken(Map<String, String> map) {
return map.containsKey("appId")&& map.containsKey("accessToken");
}
@Override
public void postFilter(PostRequest request, PostResponse response, PluginFilterChain chain) {
}
@Override
public List<String> requiredVars() {
List<String> vars = new ArrayList<>();
vars.add("appId");
vars.add("accessToken");
return vars;
}
@Override
public Boolean requiredBody() {
return true;
}
Calling the endpoint from inside the container returns:
curl "http://0.0.0.0:9080/v1/admin/routes" -i
HTTP/1.1 200 OK
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Date: Wed, 07 Dec 2022 07:22:19 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: *
Access-Control-Max-Age: 3600
Server: APISIX/2.15.1
{"action":"get","count":2,"node":{"key":"\/apisix\/routes","nodes":[{"key":"\/apisix\/routes\/435797591813260093","modifiedIndex":224,"createdIndex":36,"value":{"update_time":1670210706,"name":"testApi","plugins":{"ext-plugin-pre-req":{"conf":[{"value":"{\"validate_header\":\"token\",\"rejected_code\":\"403\"}","name":"TokenCheckFilter2"}]},"proxy-rewrite":{"uri":"\/clife-user-app-api\/test","headers":{"Host":"clife-user-app-api.clife-public"}},"limit-conn":{"key":"remote_addr","only_use_default_delay":false,"allow_degradation":false,"burst":5,"conn":2,"default_conn_delay":1.5,"key_type":"var","rejected_msg":"conn too more","rejected_code":503,"disable":false}},"id":"435797591813260093","uri":"\/v1\/account\/test","status":1,"methods":["GET","POST","PUT","DELETE","PATCH","HEAD","OPTIONS","CONNECT","TRACE"],"upstream_id":"435797316717249341","labels":{"API_VERSION":"v1","clife-user":"user"},"create_time":1669285206}},{"key":"\/apisix\/routes\/437376598874784573","modifiedIndex":558,"createdIndex":390,"value":{"update_time":1670236425,"name":"test","plugins":{"proxy-rewrite":{"uri":"\/apisix\/admin\/routes","headers":{"X-API-KEY":" edd1c9f034335f136f87ad84b625c8f1"}},"ext-plugin-pre-req":{"conf":[{"value":"{\"validate_header\":\"token\",\"rejected_code\":\"403\"}","name":"TokenCheckFilter2"}]}},"id":"437376598874784573","uri":"\/v1\/admin\/routes","status":1,"methods":["GET","POST","PUT","DELETE","PATCH","HEAD","OPTIONS","CONNECT","TRACE"],"labels":{"API_VERSION":"v1"},"upstream":{"timeout":{"connect":6,"send":6,"read":6},"scheme":"http","keepalive_pool":{"idle_timeout":60,"requests":1000,"size":320},"nodes":{"localhost:9180":1},"pass_host":"pass","type":"roundrobin"},"create_time":1670226368}}],"dir":true}}
The corresponding access log is:
127.0.0.1 - - [07/Dec/2022:07:22:19 +0000] 0.0.0.0:9080 "GET /v1/admin/routes HTTP/1.1" 200 1748 0.121 "-" "curl/7.29.0" 127.0.0.1:9180 200 0.121 "http://0.0.0.0:9080/apisix/admin/routes"
127.0.0.1 - - [07/Dec/2022:07:22:19 +0000] 0.0.0.0:9080 "GET /apisix/admin/routes HTTP/1.1" 200 1748 0.018 "-" "curl/7.29.0" - - - "http://0.0.0.0:9080"
The request succeeds directly and none of the plugin's log lines appear, so the plugin is evidently not taking effect.
What's the actual result? (including assertion message & call stack if applicable)
The request bypasses the plugin's check and succeeds, without returning the configured rejection response.
What's the expected result?
When the request does not carry the required appId and accessToken, the configured rejected_code (403) should be returned.