I work with an R Shiny application, which defines persistent workers as follows:
```r
library(shiny)
library(mirai)

# Configure async daemons
setup_daemons = function(){
  daemons(6, cleanup = FALSE, dispatcher = TRUE)
  prepare_workers = everywhere({
    options(myapp.option1 = TRUE)
    Sys.setenv(MY_CONFIG = "production")
    # Load required libraries
    required_libs = c("data.table", "mypackage1", "mypackage2")
    lapply(required_libs, library, character.only = TRUE)
  })
  # Wait until every daemon has finished its setup task
  while(unresolved(prepare_workers)){
    Sys.sleep(1)
  }
}

# Initial setup
setup_daemons()
shinyApp(ui = ui, server = server)
```

As you can see, the required libraries are loaded at start-up through `everywhere()`, and the daemons are made persistent with `cleanup = FALSE`.
I expected this to guarantee that the daemons are always running and that the libraries are always loaded. However, during execution, usually after some time and after a number of tasks had already run, workers started taking very long to resolve tasks.
Upon investigation, the delay turned out to be caused by the libraries being loaded again. This means that either the libraries were unloaded (i.e. some cleanup was performed), or the workers had died and were being re-spawned.
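To tell those two cases apart without sending a task to every daemon, one option I considered is polling `status()` on the host. The sketch below is only an illustration under assumptions: the shape of `status()$daemons` and the `online`/`instance` columns differ across mirai versions, so the column names used here may not match your installation.

```r
library(mirai)

# A lighter-weight liveness probe (a sketch, not the app's actual code).
# status() reports the host's/dispatcher's view of the daemons without
# evaluating anything on them.
daemons_look_healthy = function(n_expected = 6){
  d = status()$daemons
  if (is.matrix(d) && all(c("online", "instance") %in% colnames(d))) {
    # Assumption: "online" == 0 means a daemon dropped; "instance" > 1 means it
    # died at some point and was re-spawned with a fresh (empty) session.
    all(d[, "online"] == 1L) && all(d[, "instance"] == 1L) && nrow(d) == n_expected
  } else {
    # Unknown shape for this mirai version: inspect manually rather than guess.
    print(d)
    NA
  }
}
```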
Reactivity is essential in my application, so I've had to resort to this:
```r
check_workers_alive = function(){
  # Check that the required packages and options are present in all daemons
  load_status = everywhere({
    required_packages = c("data.table", "mypackage1", "mypackage2")
    all(required_packages %in% (.packages())) &&
      "myapp.option1" %in% names(options()) &&
      getOption("myapp.option1")
  })
  results <- unlist(collect_mirai(load_status))
  if(!all(results)){
    warning("Required packages not loaded in all daemons, reloading...")
    return(FALSE)
  } else {
    return(TRUE)
  }
}

# Periodic check - outside of the shinyApp server - one task for all sessions/connected clients
observe({
  invalidateLater(1000*60*5, session = NULL) # Check every 5 minutes
  if(!check_workers_alive()){
    setup_daemons() # Reload everything
  }
})
```

Basically, this runs periodic checks and reloads the necessary setup when required.
As an extra precaution, I also call a helper function, single_worker_check_and_load (which performs the same load check and loads the libraries if needed), inside each mirai call, in case the 5-minute interval is not enough.
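For reference, a minimal sketch of what that per-task guard could look like; this is an assumed implementation based on the setup code above, not the app's actual code, and `heavy_computation`/`x` in the usage comment are placeholders.

```r
# Sketch of single_worker_check_and_load (assumed implementation)
single_worker_check_and_load = function(){
  required_packages = c("data.table", "mypackage1", "mypackage2")
  already_ready = all(required_packages %in% (.packages())) &&
    isTRUE(getOption("myapp.option1"))
  if (!already_ready) {
    # Re-run the same setup that everywhere() performed at start-up
    options(myapp.option1 = TRUE)
    Sys.setenv(MY_CONFIG = "production")
    lapply(required_packages, library, character.only = TRUE)
  }
  invisible(!already_ready)
}

# Shipped into every task alongside its inputs, e.g.:
# m = mirai({ check(); heavy_computation(x) },
#           check = single_worker_check_and_load, x = x)
```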
My question is twofold:
- Is this spontaneous daemon death / library unloading expected?
- Is this the best (or only) pattern to work around it? It feels a bit wasteful to run verification code every 5 minutes. Is there a built-in way to ensure the daemon pool is always "ready"?