Add PVC for agent #163
Conversation
To clean old agents, the following commands can be executed on the database.

Maybe it's worth mentioning a query like:

```sql
delete from agents where last_contact < now() - interval '1 day';
```

^ not tested.
ping @anbraten
@xoxys Added yamllint and prettier config, addressed all warnings and your comments.
LGTM
```yaml
# -- Defines an existing claim to use
existingClaim:
# -- Defines the size of the persistent volume
size: 1Mi
```
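For reference, a minimal sketch of how a chart template might consume these two values (the value path `agent.persistentVolume.*` and the helper `woodpecker.fullname` are hypothetical names, not necessarily this PR's actual template):

```yaml
# templates/agent-pvc.yaml (illustrative sketch, assumed names)
# Only create a PVC when no existing claim was supplied.
{{- if not .Values.agent.persistentVolume.existingClaim }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "woodpecker.fullname" . }}-agent
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: {{ .Values.agent.persistentVolume.size }}
{{- end }}
```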
Is this supposed to be `1Gi`?
Why? The volume just holds the agent configuration (a text file). `1Mi` should be more than enough as a default.
It's `1Gi` in the agent subchart values (https://github.com/woodpecker-ci/helm/pull/163/files#diff-afab925868b45a9b5ea72b370133adf7a55e74652cfd7695bbeca6b7e2c30658R65), and I think `1Mi` is below the minimum size of some CSI provisioners.
Ok yes, I missed that. Good point; in that case we should fix it.
To persist `/etc/woodpecker/agent.conf` and solve woodpecker-ci/woodpecker#3023.

To clean old agents, the following commands can be executed on the database (Postgres example).
This will clean a lot of stale agents, but not all.
Another way is to clean all agents for which the last contact is older than 1 day (thanks @zc-devs), one can do
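presumably the query suggested by @zc-devs earlier in the thread (untested):

```sql
delete from agents where last_contact < now() - interval '1 day';
```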
Alternatively, one can remove all agents, then recreate the pod and the attached PV. This will reinitialize a fresh agent with a new ID.
I've tested this PR and recreated the agents (two replicas) many times. No new agents were created, and `/etc/woodpecker/agent.conf` always showed the persisted agent ID.