Add http endpoint for triggering a forced (out of schedule) sync #482
Allowing a manual trigger means we have to talk about authorization - who is allowed to trigger that? git-sync is often (usually!) run in a container, so putting such a mechanism on the network without authz is a bad idea. I don't think git-sync is responsible for synchronizing between instances of itself - why are you doing that? Again, I am not exactly saying no, but I'd need someone to show a clear example of how auth works.
I'm using git-sync for prometheus alert rules: when those are updated, I would like to be able to trigger an update right away (if indeed urgent) instead of waiting out the remaining wait time. The auth argument is solid. If starting a second git-sync process with `--one-time` turns out to be safe, that could be a workable alternative.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
Operation:

```
git-sync --repo https://github.com/kubernetes/kubernetes --sync-on-signal SIGHUP
git-sync --repo https://github.com/kubernetes/kubernetes --sync-on-signal HUP
git-sync --repo https://github.com/kubernetes/kubernetes --sync-on-signal 1
```

Signals can be sent to Docker containers with:

```
docker kill --signal SIGHUP <Container ID>
```

closes kubernetes#660
related kubernetes#226, kubernetes#482
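The signal-triggered mechanism above can be simulated with a plain shell script, which may help when reasoning about the feature without a running container. This is a stand-in for git-sync's loop, not git-sync itself; the file paths and the fake "sync" action are illustrative:

```shell
#!/bin/sh
# Minimal simulation of a sync loop that also syncs on SIGHUP.
rm -f /tmp/sync.log /tmp/stop

cat > /tmp/fake-sync.sh <<'EOF'
#!/bin/sh
# On SIGHUP, record a "sync" instead of waiting for the next interval.
trap 'echo "sync triggered" >> /tmp/sync.log' HUP
while [ ! -f /tmp/stop ]; do sleep 1; done
EOF
chmod +x /tmp/fake-sync.sh

/tmp/fake-sync.sh &     # the long-running "syncer"
pid=$!
sleep 1
kill -HUP "$pid"        # out-of-schedule trigger, like --sync-on-signal HUP
sleep 2
touch /tmp/stop         # let the loop exit
wait "$pid"
cat /tmp/sync.log
```

The same `kill -HUP` works against a containerized process via `docker kill --signal SIGHUP <Container ID>`, as the commit message notes.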
This was actually fairly easy. Only took a few hours - and I have it dynamically installing my python requirements too. No more rebuilding images for this guy. (:
init.py imports the rest dynamically. For the python requirements, I just modified my python code to run pip install lazily, if it detects it's being run under dagster, before it needs specific imports, and bumped up the gRPC timeout.

I'll handle optimization timing on the tail end by pruning instances where I can detect I don't have to run pip. I don't know if dagster itself caches imports across file changes; if so, I may have to add some code to manually delete cached imports from globals in the future, but this works.

For those with more static dependencies, or who want reliability instead of dynamic-import insanity, it's probably easier to just build a derivative image of the user-code-example and pip install your requirements there. Way less prone to breakage.
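The "derivative image" route mentioned above could look something like the following sketch. The base image tag, paths, and the presence of a `requirements.txt` are assumptions for illustration, not details from the thread:

```dockerfile
# Hypothetical derivative of the dagster user-code-example image.
FROM dagster/user-code-example:latest
# Bake requirements in at build time instead of lazy pip installs at runtime.
COPY requirements.txt /opt/dagster/app/requirements.txt
RUN pip install --no-cache-dir -r /opt/dagster/app/requirements.txt
COPY . /opt/dagster/app/
```

Rebuilding on dependency changes trades convenience for reproducibility, which is the reliability point the comment makes.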
Running `git-sync --one-time` with the same params as an existing `git-sync` process may cause the two syncs to interfere with each other if they happen to occur at the same time. An HTTP endpoint for triggering a sync on demand (temporarily skipping the remaining `wait` time) could be handy.

Perhaps `.git/index.lock` guarantees no interference, in which case running a second instance of `git-sync` with `--one-time` could be the go-to solution.