Add http endpoint for triggering a forced (out of schedule) sync #482

Open
sed-i opened this issue Jan 26, 2022 · 12 comments

@sed-i (Contributor) commented Jan 26, 2022

Running git-sync --one-time with the same parameters as an existing git-sync process may cause the two to interfere with each other if both syncs happen to run at the same time.
An HTTP endpoint for triggering a sync on demand (temporarily skipping the remaining wait-time) could be handy.

Perhaps .git/index.lock guarantees no interference, in which case running a second instance of git-sync with --one-time could be the go-to solution.
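
For illustration, that second-instance approach might look roughly like this (repo URL, root path, and wait-time are placeholders; flag names as in git-sync v3):

# long-running sync loop
git-sync --repo=https://github.com/example/repo --root=/tmp/git --wait=60
# ad-hoc, out-of-schedule sync against the same root; exits when done
git-sync --repo=https://github.com/example/repo --root=/tmp/git --one-time

Whether the two can safely share the same --root is exactly the open question above.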

thockin self-assigned this Jan 30, 2022

@thockin (Member) commented Jan 30, 2022

Allowing a manual trigger means we have to talk about authorization - who is allowed to trigger that? git-sync is often (usually!) run in a container, so putting such a mechanism on the network without authz is a bad idea.

I don't think git-sync is responsible for synchronizing between instances of itself - why are you doing that?

See #149 and #226

Again, I am not exactly saying no, but I'd need someone to show a clear example of how auth works.

@sed-i (Contributor, Author) commented Feb 9, 2022

> I don't think git-sync is responsible for synchronizing between instances of itself - why are you doing that?

I'm using git-sync for Prometheus alert rules: when those are updated, I would like to be able to trigger a sync right away (if it's urgent) instead of waiting for the wait-time to elapse.

The auth argument is solid.

If starting a second git-sync process with git-sync --one-time is safe (relying on index.lock to make whichever process loses the race fail cleanly), then that would probably work for me.
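
As a rough illustration of that approach in a Kubernetes sidecar setup (pod name, container name, repo URL, and paths are placeholders; this assumes the image exposes the git-sync binary as /git-sync):

# run a one-off sync inside the existing git-sync sidecar container
kubectl exec my-pod -c git-sync -- /git-sync --repo=https://github.com/example/alert-rules --root=/tmp/git --one-time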

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label May 10, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jun 9, 2022
thockin removed the lifecycle/rotten label Jun 18, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Sep 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Oct 16, 2022
thockin removed the lifecycle/rotten label Oct 17, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jan 15, 2023
thockin removed the lifecycle/stale label Jan 15, 2023
trulede added a commit to trulede/git-sync that referenced this issue Feb 15, 2023
Operation:
git-sync --repo https://github.com/kubernetes/kubernetes --sync-on-signal SIGHUP
git-sync --repo https://github.com/kubernetes/kubernetes --sync-on-signal HUP
git-sync --repo https://github.com/kubernetes/kubernetes --sync-on-signal 1

Signals can be sent to Docker containers with:
docker kill --signal SIGHUP <Container ID>

closes kubernetes#660
related kubernetes#226 kubernetes#482
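
For a git-sync sidecar running in a Kubernetes pod, the same trigger could presumably be delivered with something like the following (pod and container names are placeholders; this assumes git-sync runs as PID 1 in its container and the image provides a shell and kill):

kubectl exec my-pod -c git-sync -- /bin/sh -c 'kill -HUP 1'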
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Apr 15, 2023
thockin removed the lifecycle/stale label Apr 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jul 15, 2023
thockin removed the lifecycle/stale label Jul 15, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Jan 24, 2024
@the-xentropy

This was actually fairly easy. Only took a few hours - and I have it dynamically installing my python requirements too. No more rebuilding images for this guy. (:

  1. Configure the deployment to use a podSecurityContext that sets fsGroup to e.g. 1000. Your cluster might need runAsUser and runAsGroup too; just set them all to the same value.
  2. Add git-sync as a sidecarContainer, pulling the code into a known location. If you skip step 1 you'll get file-permission issues, since git-sync will write files with unusable permissions.
  3. Add a startupProbe to the git-sync sidecarContainer which checks for the existence of the definitions file (I do stat <package root, i.e. below the first absolute import>/somepackagename/__init__.py).
  4. Set the dagster user-code-example gRPC command line options to:
          dagsterApiGrpcArgs:
            - "--working-directory"
            - "<package root - I.E. below the first absolute import>"
            - "--python-file"
            - "<package root - I.E. below the first absolute import>/somepackagename/__init__.py"

__init__.py imports the rest using e.g. from somepackagename import <blah>. A rough sketch of steps 1-3 follows below.
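
For illustration only, a pod-spec-style sketch of steps 1-3 (image tag, repo URL, volume name, and paths are all placeholders, and the exact keys in the dagster Helm values will differ):

securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsGroup: 1000
containers:
  - name: git-sync                                        # sidecar keeping the code checkout fresh
    image: registry.k8s.io/git-sync/git-sync:v4.2.1       # example tag
    args:
      - --repo=https://github.com/example/dagster-code    # placeholder repo
      - --root=/opt/dagster/code
    volumeMounts:
      - name: code
        mountPath: /opt/dagster/code
    startupProbe:                                         # succeed only once the definitions file exists
      exec:
        command: ["stat", "/opt/dagster/code/dagster-code/somepackagename/__init__.py"]
volumes:
  - name: code
    emptyDir: {}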

For the Python requirements, I just modified my Python code to lazily run pip install (when it detects it's being run under dagster) before it needs specific imports, and bumped up the gRPC readinessProbe failureThreshold so it has time to install everything. You probably can't use an init container for this unless you want to manually restart your deployments all the time, because git-sync would pull in code whose requirements aren't installed yet - so I went the "easy" route of lazily pulling down requirements before importing them.

I'll optimize the timing on the tail end by skipping pip where I can detect it isn't needed. I don't know whether dagster itself caches imports across file changes; if it does, I may have to add some code to manually delete cached imports from globals, but this works.

For those with more static dependencies, or who want reliability instead of dynamic-import insanity, it's probably easier to just build a derivative image of the user-code-example and pip install your requirements there (see the sketch below). Way less prone to breakage.
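
A rough sketch of that more static approach (base image name and tag are placeholders):

FROM docker.io/dagster/user-code-example:latest           # placeholder base image
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt   # bake dependencies into the image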

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 10, 2024
thockin removed the lifecycle/rotten label Mar 10, 2024