fix: use watcherx to watch access rule files #1059
Conversation
Force-pushed from 7f5bc0a to 8be4be3.
Codecov Report
```
@@            Coverage Diff             @@
##           master    #1059      +/-   ##
==========================================
+ Coverage   77.79%   77.87%   +0.08%
==========================================
  Files          81       80       -1
  Lines        3967     3924      -43
==========================================
- Hits         3086     3056      -30
+ Misses        600      589      -11
+ Partials      281      279       -2
```
Force-pushed from 8be4be3 to 22b80e6.
Force-pushed from 22b80e6 to 846a053.
Very nice work; despite the complexity, the code is easy to follow :) I couldn't find any logic mistakes or issues, so I think this is 👍
We should still observe the behavior in a test environment (k3d) to make sure we did not somehow introduce regressions.
OK, I verified it works with some manual tests in k3d using this manifest:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: config
data:
  config.yml: |-
    access_rules:
      repositories:
        - file:///etc/rules/rules.yml
      matching_strategy: regexp
    authenticators:
      anonymous:
        enabled: true
    authorizers:
      allow:
        enabled: true
    mutators:
      noop:
        enabled: true
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: rules
data:
  rules.yml: |-
    - id: test-rule-1-yaml
      upstream:
        preserve_host: true
        strip_path: /api
        url: https://mybackend.com/api
      match:
        url: myproxy.com/api
        methods:
          - GET
          - POST
      authenticators:
        - handler: anonymous
      authorizer:
        handler: allow
      mutators:
        - handler: noop
---
kind: Pod
apiVersion: v1
metadata:
  name: ory-oathkeeper
spec:
  containers:
    - name: oathkeeper
      image: oryd/oathkeeper:dev-alpine
      args:
        - serve
        - -c
        - /etc/configs/config.yml
      ports:
        - containerPort: 4455
        - containerPort: 4456
      env:
        - name: config
          valueFrom:
            configMapKeyRef:
              name: config
              key: config.yml
        - name: rules
          valueFrom:
            configMapKeyRef:
              name: rules
              key: rules.yml
      volumeMounts:
        - name: config
          mountPath: /etc/configs
        - name: rules
          mountPath: /etc/rules
  volumes:
    - name: config
      configMap:
        name: config
    - name: rules
      configMap:
        name: rules
```

The commands to run this were:

```shell
make docker
k3d cluster create --image docker.io/rancher/k3s:v1.21.8-k3s1
k3d image import oryd/oathkeeper:dev-alpine
kubectl apply -f manifests.yml
# do some updates to the config & rules
# apply again
```

At first I was confused because the update did not seem to propagate, but it turned out that was because of different log levels for config and access rule update logs 😅
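To make both kinds of reload messages visible during such a test, the log level can be raised in the `config.yml` embedded in the `config` ConfigMap above. Oathkeeper reads this from the `log` section of its configuration; whether `debug` surfaces every reload message is an assumption here, not verified against the code:

```yaml
# excerpt of config.yml in the `config` ConfigMap above
log:
  level: debug  # assumption: debug shows reload messages that the default level may hide
```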