
model_summary() Intermediate Printout #23

Closed
callistosp opened this issue Mar 3, 2020 · 5 comments · Fixed by #24
Labels: bug (Something isn't working)

@callistosp
Collaborator

Problem:

When a model is still running and `model_summary()` is called on the results object, the OUTPUT file is tailed repeatedly until either (1) the model completes, or (2) the number of attempts (30?) is exceeded. As a result, the tail of OUTPUT is printed to the console over and over:

 iteration          324 MCMCOBJ=    29416.952359567695     
 iteration          325 MCMCOBJ=    29413.373263451384     
 iteration          326 MCMCOBJ=    29490.598695438199     
 iteration          327 MCMCOBJ=    29430.406526993076     
 iteration          328 MCMCOBJ=    29416.239276600747     
 iteration          329 MCMCOBJ=    29428.134365432856     
 iteration          330 MCMCOBJ=    29445.970213943467     
 iteration          331 MCMCOBJ=    29435.262195693984     
---
Model is still running. Tail of `106/OUTPUT` file: 
---
 ...
 iteration          322 MCMCOBJ=    29399.495862674936     
 iteration          323 MCMCOBJ=    29437.381530674444     
 iteration          324 MCMCOBJ=    29416.952359567695     
 iteration          325 MCMCOBJ=    29413.373263451384     
 iteration          326 MCMCOBJ=    29490.598695438199     
 iteration          327 MCMCOBJ=    29430.406526993076     
 iteration          328 MCMCOBJ=    29416.239276600747     
 iteration          329 MCMCOBJ=    29428.134365432856     
 iteration          330 MCMCOBJ=    29445.970213943467     
 iteration          331 MCMCOBJ=    29435.262195693984     
---
Model is still running. Tail of `106/OUTPUT` file: 
---

Proposed solution:

If the model is still running, the tail of OUTPUT should be printed only once.
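A minimal sketch of what that fix could look like, assuming (per the transcripts in this thread) that the OUTPUT file exists only while the run is in progress; `wait_for_model()` and its arguments are hypothetical, not rbabylon's actual internals:

```r
# Hypothetical polling loop illustrating the proposed fix: keep checking
# for completion, but print the tail of OUTPUT a single time.
wait_for_model <- function(output_path, max_attempts = 30, interval = 5) {
  tail_printed <- FALSE
  for (i in seq_len(max_attempts)) {
    if (!file.exists(output_path)) {
      # OUTPUT is gone, so the run is no longer in progress
      return(invisible(TRUE))
    }
    if (!tail_printed) {
      cat("---\n")
      cat(sprintf("Model is still running. Tail of `%s` file:\n", output_path))
      cat(utils::tail(readLines(output_path), 10), sep = "\n")
      cat("\n---\n")
      tail_printed <- TRUE  # suppress the tail on subsequent attempts
    }
    Sys.sleep(interval)
  }
  invisible(FALSE)  # attempts exceeded; model still running
}
```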

@callistosp callistosp added the bug Something isn't working label Mar 3, 2020
@callistosp
Collaborator Author

callistosp commented Mar 3, 2020

Update: this appears to be related to a change in the last PR merged into develop. Even when importing the results from a completed model, `model_summary()` still thinks the model did not run and looks for OUTPUT:

> restest <- rbabylon::import_result(file.path(MODEL_DIR, "101.yaml"))
> restest %>% model_summary(.wait=0)
/data/rbabylon-example-project/model/pk/101/101.ext file does not look finished but there is also no `101/OUTPUT` file. Your run may have failed.
---
Model is still running. Tail of `101/OUTPUT` file:
---
FALSEError in nonmem_summary(.res, ...) :
  101.ctl is not finished and 0 second wait time has elapsed. Check back later or increase `.wait`.

@dpastoor
Contributor

dpastoor commented Mar 4, 2020

Seth and I discussed this some this morning; I think `model_summary()` needs an overhaul.

Here is what I am thinking - `model_summary()` should only work if the model has completed.

  1. We need to define how we can reliably detect completion, as quickly as possible. For example, scanning the entire .ext file for the -1000000 lines means reading that entire file, which could be quite large (see the sketch at the end of this comment).

  2. It should just return either a model result, or print a message saying the model is not completed and no summary is available, and return NULL.

  3. We need to separately provide monitoring functions that can handle whatever waiting the scientist may want:

spec %>% submit_model() %>% wait_until_complete() %>% summarize_model()

The reason that waiting should be moved outside of submission itself is that this allows composability and other customization. For example, maybe we want to provide a wait function that also has enough logic to know about failures - basically a specific tryCatch:

spec %>% submit_model() %>% 
    wait_until_complete(error = function(e) {
       # some details of what to do if error occurs
    }) %>% 
    summarize_model()
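For item 1 above, here is a hedged sketch of a completion check that avoids scanning the whole file: seek to the last few kilobytes of the .ext file and look for the sentinel lines only there. `ext_is_finished()` is hypothetical, and the `-1000000` marker is taken from the comment above, so a real implementation may need a stricter pattern:

```r
# Read only the tail of the .ext file and look for the final-estimate
# sentinel, instead of scanning the entire (possibly large) file.
ext_is_finished <- function(ext_path, bytes = 4096) {
  size <- file.size(ext_path)
  if (is.na(size) || size == 0) return(FALSE)
  con <- file(ext_path, open = "rb")
  on.exit(close(con))
  seek(con, where = max(0, size - bytes))  # jump near the end of the file
  chunk <- readChar(con, nchars = bytes, useBytes = TRUE)
  # final-estimate records begin with a large negative iteration number
  grepl("-1000000", chunk, fixed = TRUE)
}
```

A `wait_until_complete()` like the one sketched above could then poll this check on an interval and route failures through the supplied handler via `tryCatch()`.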

@callistosp
Collaborator Author

That seems like a reasonable expectation to me. If you're pulling out `wait_until_complete()`, it would also be nice to have a `wait_until_starts()`. E.g. if I'm shooting off some runs overnight before I leave work, I would run `spec %>% submit_model() %>% wait_until_starts()` to verify that the run started before I shut down my workflow (see the sketch below).
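A hypothetical `wait_until_starts()` along those lines (the name, arguments, and the choice of OUTPUT as the signs-of-life marker are all illustrative, not part of rbabylon):

```r
# Poll until the run shows signs of life (here: the OUTPUT file appears),
# so an overnight workflow can confirm the run started before exiting.
wait_until_starts <- function(output_path, timeout = 300, interval = 5) {
  deadline <- Sys.time() + timeout
  while (Sys.time() < deadline) {
    if (file.exists(output_path)) {
      message("Run started: ", output_path)
      return(invisible(TRUE))
    }
    Sys.sleep(interval)
  }
  stop("Run did not start within ", timeout, " seconds: ", output_path)
}
```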

@seth127 seth127 linked a pull request Mar 5, 2020 that will close this issue
@seth127
Collaborator

seth127 commented Mar 5, 2020

@callistosp this will be closed when the linked PR merges. The fix is that `model_summary()` just stops with an informative error if it can't get a summary (see the tests mentioned below). We intend to make a better version of `check_nonmem_progress()` in a future release, and it will be incorporated into `model_summary()` under the hood.

If you have thoughts on how you would like that to work, like some of the ones you put in the comments above, you can open a new issue with the desired functionality and tag it with the milestone roadmap.

Tests

  • tests/testthat/test-summary.R
    • model_summary() fails predictably if it can't find some parts (i.e. model isn't finished)
    • model_summary() fails predictably if no .lst file present
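As a hedged sketch, those assertions might look something like this with testthat (the real tests live in tests/testthat/test-summary.R; the `unfinished_model` and `model_without_lst` fixtures here are illustrative, and the error messages are matched against the wording seen earlier in this thread):

```r
library(testthat)

test_that("model_summary() fails predictably if model isn't finished", {
  # hypothetical fixture: a results object for a still-running model
  expect_error(
    model_summary(unfinished_model, .wait = 0),
    regexp = "is not finished"
  )
})

test_that("model_summary() fails predictably if no .lst file present", {
  # hypothetical fixture: a completed run with its .lst file removed
  expect_error(
    model_summary(model_without_lst),
    regexp = "\\.lst"
  )
})
```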

@seth127
Collaborator

seth127 commented Mar 10, 2020

We're closing this because there is another issue to deal with it:
#31

@seth127 seth127 closed this as completed Mar 10, 2020