Conversation

@shreddedbacon (Member) commented Dec 19, 2023

Checklist

  • Affected Issues have been mentioned in the Closing issues section
  • Documentation has been written/updated
  • PR title is ready for changelog and subsystem label(s) applied

Initial support for idling control from the API.

This can be merged and released before support is added in the API; it will just sit dormant until then.

@shreddedbacon shreddedbacon force-pushed the idling-support branch 3 times, most recently from f9e6da8 to 4bd3230 Compare July 31, 2025 00:51
@shreddedbacon shreddedbacon force-pushed the idling-support branch 2 times, most recently from a7f31b0 to 3bd59b2 Compare November 20, 2025 22:43
@shreddedbacon shreddedbacon marked this pull request as ready for review November 20, 2025 22:59
Comment on lines 184 to 187
```go
prevReplicas, err := strconv.ParseInt(deployment.Annotations["service.lagoon.sh/replicas"], 10, 32)
if err != nil {
	return err
}
```
Member
If the replica count was set to 0 manually (eg, via kubectl, bypassing this new feature), will this always fail/return an error? Should we allow it to succeed and default to 1 replica instead?

Member Author

Yeah, defaulting to 1 should be fine. Some overrides in the build-deploy-tool can change the default replicas to 2 when enabled, but if someone is scaling services to 0 outside of aergia or other automated systems, going to 1 is an OK compromise. On the next deploy, any settings that would result in 2 replicas would simply be reapplied.
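The compromise discussed above could be sketched as a small helper (the name and shape are hypothetical, not the PR's actual code): fall back to 1 replica whenever the annotation is missing, unparseable, or 0, instead of returning an error.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseReplicasOrDefault is a hypothetical helper illustrating the fallback
// discussed in the review: if the stored annotation is absent or unparseable
// (eg. the deployment was scaled to 0 manually via kubectl), default to 1
// replica rather than failing. Any build-deploy-tool overrides (eg. 2
// replicas) would be reapplied on the next deploy anyway.
func parseReplicasOrDefault(annotation string) int32 {
	prevReplicas, err := strconv.ParseInt(annotation, 10, 32)
	if err != nil || prevReplicas < 1 {
		return 1
	}
	return int32(prevReplicas)
}

func main() {
	fmt.Println(parseReplicasOrDefault("3")) // 3
	fmt.Println(parseReplicasOrDefault(""))  // 1 (annotation missing)
	fmt.Println(parseReplicasOrDefault("0")) // 1 (manually scaled to 0)
}
```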

```go
opLog.Info(fmt.Sprintf("deployment %s", deployment.Name))
opLog.Info(fmt.Sprintf(`{"replicas":%d}`, *deployment.Spec.Replicas))
// this would be nice to be a lagoon label :)
if val, ok := deployment.Labels["idling.amazee.io/idled"]; ok {
```
Member

Why is this label required? It's not checked here or modified in EnvironmentServiceState. It would be nice if managing services wasn't tied to having aergia installed.

Member Author

Rather than re-creating all the logic for handling idling and unidling in the remote-controller, this does unfortunately mean aergia is a requirement, with a caveat.

Eventually, aergia will use lagoon based labels and be installed as part of lagoon-remote by default, however the ingress listener and automatic idling would be disabled. Effectively it would just handle manual idling and unidling messages from the API.
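A minimal sketch of the label check under discussion, using a plain map in place of the Kubernetes Deployment's Labels field so it stays self-contained (the helper name is illustrative; "idling.amazee.io/idled" is the aergia-managed label from the excerpt, and a future lagoon.sh label could be checked the same way):

```go
package main

import "fmt"

// isIdled reports whether the aergia idled label is present and set to "true".
// A deployment without the label (eg. aergia not installed) reads as not idled.
func isIdled(labels map[string]string) bool {
	if val, ok := labels["idling.amazee.io/idled"]; ok {
		return val == "true"
	}
	return false
}

func main() {
	fmt.Println(isIdled(map[string]string{"idling.amazee.io/idled": "true"})) // true
	fmt.Println(isIdled(map[string]string{}))                                 // false
}
```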

```go
pID, _ := strconv.Atoi(namespace.Labels["lagoon.sh/projectId"])
projectID := helpers.UintPtr(uint(pID))
idling := Idled{
	Idled: idled,
```
Member

In this codepath, is it possible to know if this is auto-idled vs force-idled vs force-scaled and send back something closer to an enum? I think it would be useful if the Lagoon API/UI can say [for example, not suggesting renaming idling in this service]:

This environment is Active | Inactive (will resume automatically) | Disabled (must be manually resumed).

That could be a follow-up if we don't commit to a boolean value for now.

Member Author

This is interesting. I hadn't thought about that as a function, mostly because people have only requested seeing the idled state.

It wouldn't be much of a refactor to send the full state instead, if it's easy enough to determine.
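The enum-style value suggested above could look something like this in Go. All names and UI wording here are illustrative only, taken from the review comment, and are not part of the Lagoon API:

```go
package main

import "fmt"

// EnvironmentState sketches sending a state enum instead of a plain boolean,
// so the API/UI can distinguish auto-idled from force-scaled environments.
type EnvironmentState string

const (
	StateActive   EnvironmentState = "active"   // running normally
	StateIdled    EnvironmentState = "idled"    // auto-idled, resumes automatically
	StateDisabled EnvironmentState = "disabled" // force-scaled, must be manually resumed
)

// describe maps a state to the UI wording proposed in the review comment.
func describe(s EnvironmentState) string {
	switch s {
	case StateActive:
		return "This environment is Active"
	case StateIdled:
		return "This environment is Inactive (will resume automatically)"
	case StateDisabled:
		return "This environment is Disabled (must be manually resumed)"
	}
	return "Unknown state"
}

func main() {
	fmt.Println(describe(StateIdled))
}
```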

