Replies: 36 comments 13 replies
-
Hi @splitice, I understand where you're coming from - the current setup for GitHub Actions workflows is certainly optimized for longer-running jobs. But I'm not sure that multiple jobs is more "correct" unless you actually need a separate virtual machine for each step.

I'd love to know a bit more about what a well-structured deployment script looks like to you - is this something you're willing to share? If not, I'd love to get the highlights: what are these jobs doing? Philosophically, why do you think of them as separate jobs instead of separate steps within a job? How are you sharing data between them (or are you not)? Are you using jobs as an isolation environment? If so, could containers serve as that isolation environment, or do you require really strict isolation? Is it about the parallelization of jobs?

Regarding the costs, it's important to remember that each job runs on a separate virtual machine, and each new virtual machine takes time to set up and tear down. We keep runners in the set-up state so that your jobs can start on them as quickly as possible. We only bill for the time that your job is actually executing on our runners; we absorb the remainder as our own cost.
-
@ethomson I can't share the workflow of the job that inspired this, but I can share a screenshot of a similar one. It's a CD workflow for an in-development application (already at 12 jobs - expected to be over 30 when complete). Could I make these into a single job? Probably, but not without sacrificing a lot of the provided functionality. For example:
I understand you have to spin up / spin down runners (unfortunately overkill for workloads like this), but it doesn't change my opinion regarding how billing rounding should be implemented. A change like this would allow people to really put the runner side ("if", matrix builds, etc.) to work for lightweight CD jobs, rather than encouraging designs that minimise cost (especially long term, as the workflow grows). To directly answer your questions:
Continuous Deployment to staging / production environments. Sometimes to one target; sometimes we run matrix builds on top of these jobs as well (e.g. to deploy to multiple datacenters).
Non-linear dependencies, different tooling requirements, and (in some cases) different destination clusters (e.g. some services are k8s, some bare metal, and some not software-related at all).
Not applicable - for this example, each step is stateless.
Not particularly. Without isolation some of the jobs would be 1-2s faster (most jobs have a small amount of overhead for initializing tooling / environment), but that's "fast enough" as is.
Certainly, containers would be more than sufficient.
Parallel execution when conditions are met is necessary to ensure timely execution of complex workflows. Although these jobs require little CPU (at least compared to serious unit tests), they do require authentication and communication with remote services, with the latencies inherent in such actions (e.g. comparing service versions or pulling a Docker image). In situations like this, linear execution of the jobs would be significantly slower. The example I provided would take ~2:30 min sequentially vs ~1 min at its current size.
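The parallel-vs-sequential gap described above can be sketched with a toy dependency graph. The job names and durations below are illustrative assumptions, not the actual workflow: the point is that with `needs`-style dependencies, wall-clock time is the critical path, while sequential execution pays the sum of all jobs.

```python
# Sketch: why parallel jobs matter for a fan-out deployment workflow.
# Job names and durations are assumptions for illustration only.
deploy_jobs = {
    "build":      {"needs": [], "seconds": 20},
    "deploy-dc1": {"needs": ["build"], "seconds": 30},
    "deploy-dc2": {"needs": ["build"], "seconds": 30},
    "deploy-dc3": {"needs": ["build"], "seconds": 30},
}

def sequential_seconds(jobs):
    # Running every job one after another: pay the sum.
    return sum(j["seconds"] for j in jobs.values())

def critical_path_seconds(jobs):
    # Running jobs in parallel where dependencies allow:
    # pay only the longest needs-chain.
    def finish(name):
        job = jobs[name]
        start = max((finish(dep) for dep in job["needs"]), default=0)
        return start + job["seconds"]
    return max(finish(name) for name in jobs)

print(sequential_seconds(deploy_jobs))     # 110 seconds sequentially
print(critical_path_seconds(deploy_jobs))  # 50 seconds in parallel
```

With per-job minute rounding, though, the parallel layout here bills 4 minutes while the single sequential job would bill 2, which is exactly the tension this thread is about.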
-
I really appreciate your detailed feedback - this is really useful to understand for our planning. Having a runner for each of your jobs is indeed overkill in this scenario. Right now a job is affined to a runner, and a runner is expensive, which doesn't fit well with what you're trying to do here. We've talked a bit about how to support these sorts of lighter-weight jobs, but we don't have anything to announce yet.
-
I am having the same issue. I created a matrix job in a pipeline that packages umbrella Helm charts (around 90). Each matrix job takes roughly 30 seconds, but I'll be changing this to a script because the matrix approach is using too many minutes.
-
I might understand that GitHub Actions billable time was rounded to the minute 3 years ago, when the service was still young, but 3 years later I just don't understand why GitHub does not charge a fair price based on real usage. AWS, for instance, now charges to the millisecond for Lambda. I understand it's not the same architecture, but paying 10 minutes for 10 jobs running 5 seconds each seems like nonsense to me.
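The arithmetic behind that complaint can be sketched directly. Each job's duration is rounded up to the next whole minute before summing, so ten 5-second jobs bill as ten minutes against roughly fifty seconds of real compute:

```python
import math

def billed_minutes_per_job(job_seconds):
    # GitHub Actions rounds each job up to the next whole minute.
    return sum(math.ceil(s / 60) for s in job_seconds)

jobs = [5] * 10                  # ten jobs of ~5 seconds each
actual_minutes = sum(jobs) / 60  # ~0.83 minutes of real compute
billed = billed_minutes_per_job(jobs)
print(billed)                          # 10 minutes billed
print(round(billed / actual_minutes))  # ~12x the real usage
```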
-
I've just experienced a very similar thing - now moving into 2023 🤦♂️ I run two jobs, one for JavaScript linting and one for PHP linting. In total they perform on average 30 seconds of work between them, but I'm charged 2 minutes every time 👀 It's on the small scale, but I can see this piling up over time. That's a time charge of 4x my actual usage. It seems silly to crunch this into a single linting job, because then I also can't keep track of which job/task is taking longer if runtimes start to spike, and they have different requirements to run.

Below is another part of my workflow: a task for compiling assets, ready to cache the data for subsequent jobs that may or may not need to run. The testing jobs run in parallel because, well, they're testing jobs and either may or may not be needed. In this setup I'm charged 3 minutes for less than a minute's work! Excuse me for trying to be efficient and only running my tests when they're needed.

Instead of this nice workflow, I'll now be forced to always run at least my unit test suite, and design that job to handle caching of assets for any other jobs. At least then I can shave a guaranteed minute off every time. It's just not as clean; I suppose I'll save 7 seconds doing so, but I'm forced to always run my tests even when they're not needed. Surely GitHub should have an efficient way of containerizing for a few seconds and only charging me for actual usage by now?
-
Came across this thread while trying to figure out how 11 seconds resulted in a 54x charge of 10 minutes. Not sure if it's related to rounding, or if it's a bug that overcharges for reruns or unmet conditions:
-
We hit this issue fairly hard: in many circumstances we're getting charged over 60x more than we would be otherwise.
@ethomson in our case we have many jobs because they are generated from a matrix (one for each folder in the repository).
-
We hit the same issue. We are evaluating different solutions, and this issue discourages using concurrency: instead of building something once and then running multiple jobs with the result of the build, we have to put everything in the same job to reduce our costs, and this doubles the time of the whole pipeline (1 min vs 2 min). In 2023 we could expect billing at the second level, when AWS is doing it even at the millisecond level.
-
How 'un'funny: I just split up my workflow into a pipeline with sub-jobs (the GitHub docs also say it's a good way to reuse workflows) and now I'm discovering this specific issue. The main reason I split it up was that I have two images which need to be built successfully before I run an e2e test.
-
Started using the matrix strategy to reuse jobs, but with this in mind, it's a NO-GO!
-
Yeah, this rounding-up billing model is really anti-customer. Do better.
-
5 months later and still overbilling
-
I set up a simple workflow to check stale issues and PRs; it runs for 4 seconds and I'm billed for the full minute. Should I hide this check in some other workflow instead? It was easy to create a workflow that 100% solved my needs, but now I'm having to rethink how I organize it because of billing issues that I shouldn't have to think about in such a simple scenario, IMHO.
-
This is frustrating because by splitting up jobs to be more efficient and take less overall time, I'm actually using more billable minutes?
-
Seems like GitHub will do nothing as long as people swallow the pill and pay. I'm looking into CircleCI and other competitors.
-
Any update from anyone at GitHub?
-
I have a dozen jobs that run just over 1 minute, and rounding them up to 2 minutes basically doubles their cost.
-
The docs say:
Run time
-
Let's also take note that aside from overbilling customers with inaccurate billable minutes, GitHub also applies minute multipliers to Windows and macOS machines: x2 for Windows and x10 for macOS (seriously?!). It might have been understandable if macOS' multiplier were just x5, but x10? That's way too aggressive for a service that can't accurately bill its users for the exact time it was used.
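Combining the two effects described in this thread, the multipliers stated above (x2 for Windows, x10 for macOS) apply on top of the per-job minute rounding. A sketch of the combined calculation:

```python
import math

# Per-OS minute multipliers, as cited in the comment above:
# Linux x1, Windows x2, macOS x10.
MULTIPLIER = {"linux": 1, "windows": 2, "macos": 10}

def billed_minutes(job_seconds, runner_os):
    # Each job is rounded up to a whole minute first,
    # then the OS multiplier is applied.
    return math.ceil(job_seconds / 60) * MULTIPLIER[runner_os]

# A single 30-second job consumes 1 Linux minute,
# 2 Windows minutes, or 10 macOS minutes of quota.
print([billed_minutes(30, os) for os in ("linux", "windows", "macos")])  # [1, 2, 10]
```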
-
I just came across this, and I'm really surprised it still hasn't been addressed in any way.
-
Yep, we just got set up using this system and got billed a whole bunch of minutes we didn't use, for processes that took just a few seconds each. Looks like we won't be expanding on this system until this is addressed.
-
It was all easy and pleasant to set up a GitHub Actions pipeline until I decided to split one job into multiple for separation of concerns and DRY, and then found out how GitHub bills its customers. Most likely I will be looking for another CI/CD service.
-
I guess nobody here has come across 0 seconds rounded up to 1 minute yet, but it's true, it can happen :)
-
This really kills the idea of using this as a cheap method to run short scripts often. Too bad, I thought the setup was pretty neat.
-
Something is quite off with how GitHub calculates the run time of a job. I have this job run: it is basically 1s, based on the times in this screenshot. But the summary page shows a 12-second duration and 1 minute of billable time, and it also shows 0 seconds for this job in the dependency chart. Is there an 11s tax added to each job run? Why are three different job durations displayed? Can anyone explain this?
-
3 years and still no recourse... I think it might be time to move to AWS CodeBuild or GCP Cloud Build or similar. As much as I love GitHub Actions and as much as I'm a big proponent of using them, this billing nonsense is hard to justify.
-
It's insanity that this is still unchanged after all these years. I just split my workflow up to run some stuff in parallel to speed up the overall deployments, and now I'm seeing billable time 6-8 minutes higher than the total duration, mostly because a lot of my jobs happen to go over the minute mark by a few seconds.
-
this is essentially robbery.
-
A good, well-structured deployment workflow in GitHub Actions can involve many quick jobs. In our case we are approaching 16 (typical execution time of 4-6 seconds each); it's a bit unreasonable to be paying 16 billable minutes per run for just over 1 minute of total CPU time.
This encourages us to create larger monolithic workflows rather than clean and reusable ones. If per-minute rounding were instead applied to the entire workflow, this would encourage correct usage and enable more use cases.
I understand the need to make money; I just wish the costs were distributed more fairly.
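The difference between the current model and the proposal above can be sketched as two rounding strategies: round each job up to a whole minute and sum, versus sum the job runtimes and round once per workflow:

```python
import math

def billed_per_job(job_seconds):
    # Current model: each job is rounded up to a whole minute.
    return sum(math.ceil(s / 60) for s in job_seconds)

def billed_per_workflow(job_seconds):
    # Proposed model: round once, on the workflow's total runtime.
    return math.ceil(sum(job_seconds) / 60)

jobs = [5] * 16  # sixteen ~5-second jobs, as described above
print(billed_per_job(jobs))       # 16 minutes billed today
print(billed_per_workflow(jobs))  # 2 minutes under per-workflow rounding
```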