After learning the basic usage of Dagu, I have a question: why did Dagu choose the local disk as its storage backend? Are there any plans to support databases? As we know, a physical node, regardless of its storage capacity, always carries the risk of running out of space, and local files may also be deleted unexpectedly due to unforeseen failures. So when we start the Dagu service from the command line, are we exposed to these risks?
I also noticed that when the Dagu scheduler schedules tasks, it uses a watch mechanism to detect changes to the files in the DAGs directory. This feels somewhat similar to etcd; Redis and MySQL offer comparable change-notification mechanisms as well. Would it be possible to watch through a database instead? Also, when using Dagu, we submit a YAML file through the web UI that describes the overall structure of the task, which is similar to the declarative style of Kubernetes. So if Kubernetes deployment of Dagu is supported in the future, backing it with etcd or another distributed store might also be a good fit.
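To make the "watch mechanism" idea concrete, here is a minimal polling-based sketch of how a scheduler could detect added, removed, or modified DAG files by comparing modification-time snapshots of the directory. This is only an illustration of the concept, not Dagu's actual implementation (which may well use OS-level file notifications instead of polling); the function names `snapshot` and `diff` are hypothetical.

```python
import os

def snapshot(dag_dir):
    """Map each YAML file in dag_dir to its last-modified time."""
    return {
        name: os.stat(os.path.join(dag_dir, name)).st_mtime
        for name in os.listdir(dag_dir)
        if name.endswith((".yaml", ".yml"))
    }

def diff(before, after):
    """Return (added, removed, modified) file-name sets between two snapshots."""
    added = set(after) - set(before)
    removed = set(before) - set(after)
    modified = {n for n in set(before) & set(after) if before[n] != after[n]}
    return added, removed, modified
```

A scheduler loop would call `snapshot` periodically and act on whatever `diff` reports. A database-backed variant would replace the mtime comparison with the store's native notification primitive, e.g. etcd's `Watch` API, which pushes change events instead of requiring polling.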
The reason Dagu does not use a DBMS is that I was simply too lazy to maintain a database. I think this simplicity is a unique point of Dagu compared to other workflow engines such as Temporal. I like the idea of Kubernetes and etcd support, though. Maybe we can support both cases in the future.