Run Tasks in Background #762
Comments
This sounds like docker-compose would do very well for this use case. Have you tried using that?
@ghostsquad docker-compose is not an option
Ok, so for clarification: what you need is essentially that of the github.com/oklog/run `run.Group` described below?
That describes how we can run things in parallel, but we may have multiple layers of parallelism. For example: …
Hi @tiloio, thanks for opening this issue. This feature seems too specific to be built into Task, IMO; few people would use it. As mentioned, Docker Compose is a tool you could consider to achieve that. Or, if you want to run these dependencies on your own machine, you could use something like …
@tiloio I somewhat agree with @andreynering on this; it does kind of feel like what docker-compose is already good at. I've written compose files that do exactly what you are asking for: spin up multiple dependencies, then finally spin up a client/test container that runs end-to-end tests. The exit code of the client/test container is passed through and becomes the exit code of the compose command, allowing me to use this method in CI and other automation. That said, I am working on another project that might be able to handle this use case. If you would like, I can share my compose file and Taskfile.
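A compose file following that pattern might look roughly like this. Service names, images, and commands below are illustrative assumptions, not taken from the thread:

```yaml
# Hypothetical compose file: spin up a dependency, then a test container.
# With `docker compose up --exit-code-from e2e`, the e2e container's exit
# code becomes the exit code of the compose command, which CI can check.
services:
  mongo:
    image: mongo:7        # the dependency (image tag is an assumption)
  e2e:
    build: .              # the client/test container
    depends_on:
      - mongo
    command: npm test     # this command's exit code is what CI sees
```

Running `docker compose up --exit-code-from e2e` then propagates the test result to the calling automation.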
Compose is very specific to containers, as OP mentioned above. This is more what … It is something that task runners like …
The following example seems to work to put a process in the background: https://stackoverflow.com/questions/3096561/fork-and-exec-in-bash. It kind of looks like it works, but I don't know enough to say whether this is advisable; it would make a nice little docs section next to …
This is the standard POSIX way of doing background processes, but then you have to manage them manually. You can't Ctrl+C to kill them; you have to kill the process yourself when you're done. If you're running multiple background services / tasks, it quickly becomes very unwieldy, which is exactly the kind of thing you'd want a task runner to help orchestrate.
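A minimal POSIX-shell sketch of that manual bookkeeping, using `sleep` as a stand-in for a real service such as `mongod` (an assumption for illustration):

```shell
#!/bin/sh
# Start a stand-in "service" in the background and remember its PID.
sleep 30 &
SVC_PID=$!

# Without this trap, Ctrl+C (or any exit path) would leave the process behind.
trap 'kill "$SVC_PID" 2>/dev/null || true' EXIT INT TERM

# ... run tests or other work against the service here ...

# Done: stop the background service ourselves.
kill "$SVC_PID"
wait "$SVC_PID" 2>/dev/null
echo "service stopped"
```

Every backgrounded service needs its own PID variable and cleanup line, which is the bookkeeping that becomes unwieldy with multiple services.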
IIRC it loses the prefix and colors (and output is randomly spliced into your terminal, as background processes are), so this becomes a bit hard to follow, especially when you have multiple.
The color for the logs seems to remain, and the process exits when task does.
👀 I wonder if this is because … Whenever I ran background processes, I would complete the task and they would still run in my terminal until I manually killed them.
Yeah, maybe the process exits due to the …
Be able to run stuff (like a database) in the background, with waiting mechanisms, e.g. wait for log output or until a URL returns status code `x`.

When running tests or starting an application, I depend on other processes which have to run in the background. For example, if I have integration tests which use a MongoDB, the MongoDB has to be started first and be ready to receive requests. Another example: when I want to start my app, which uses this MongoDB, a backend, and a frontend, I have to start everything by hand in separate terminals.
It would be great if we had something to `release` a single task, but where we will still receive the logs.

For example, when I run my application with `task start`, the task could look like this:

First the `backend` and `frontend` will start in parallel. The `backend` needs the `database`, so the `database` is started with Docker and we wait at most ten times with 100 ms pauses until the `database` is reachable via the curl command. After that the `backend` is started with `npm start`. There we wait until `stdout` contains `Running on localhost:8080`, with a timeout of 60 seconds. In between, the `frontend` was also started with `npm start`. There we wait until `http://localhost:4200` returns `HTTP 200 OK`, with the task default timeout.

The log output may look like that:

And if some of the `released` tasks are logging, you will still see the log in the terminal. E.g. if I hit `/api` on the backend, the log looks like:

In the end we started our whole infrastructure and application with just one task command. And we receive all logs of all services and are aware if something went wrong.
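The "retry until reachable" step described above can be sketched as a small POSIX-shell helper. The function name and defaults here are assumptions for illustration, not existing Task functionality:

```shell
#!/bin/sh
# wait_for_url URL [TRIES] [PAUSE]: poll URL with curl until it responds,
# at most TRIES times, sleeping PAUSE seconds between attempts.
# Returns 0 once reachable, 1 if all attempts fail.
wait_for_url() {
  url=$1
  tries=${2:-10}   # default: try 10 times, as in the example above
  pause=${3:-0.1}  # default: 100 ms between attempts
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  return 1
}
```

For instance, a wrapper script could call `wait_for_url http://localhost:27017 10 0.1` after starting the database container and before launching the backend.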
What do you think of this approach? Can we build something like this into task?