- Strip the `dscr` API down to as few exported commands as possible, and rename them in `snake_case` (Naming conventions #44). This is a serious breaking change, and I'd have to assist students in getting all their dscs back on the rails afterwards.
- `BatchJobs` registries for parallel execution (Convert dsc to BatchExperiment to allow easy submission of jobs to cluster #23, Integration with batchJobs #46).
- Assist in the process of packaging only the code and `.RDS` scores (as opposed to raw data or the other cached results of computation) as a Git repo such that the repo size is GitHub-friendly (this is a new one).
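One way to sketch the "code and scores only" packaging would be a `.gitignore` along these lines (the directory names `data/`, `cache/`, and `scores/` are only an illustration of a possible dscr layout, not the actual one):

```gitignore
# Ignore raw/simulated data and cached intermediate computation
data/
cache/
# Ignore .rds files everywhere...
*.rds
# ...except the score files, which are small enough for GitHub
!scores/*.rds
```

The negation pattern works because the `scores/` directory itself is never ignored, only `*.rds` files generally, so Git can re-include the score files beneath it.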
At that point, I think that dscr repositories (versioned under Git and hosted by GitHub) should be effective enough to facilitate new methodological collaborations, where new scenarios or methods could be shared via GitHub pull requests. Rerunning other people's computation (to get the non-score stuff) could be done on an as-needed basis, but the idea is that such "auditing" tasks would be rarer than adding code and scores to the repo.
This specifically punts on some of the other directions in the interest of time:
- CRAN-ready or otherwise cleanly engineered build (Clean build #38)
- Input parsers or any more depth/complexity to the workflow hierarchy (input parser #42)
My only quick comment regarding renaming in `snake_case` is that I might prefer to (at least initially) keep the lower-camel-case functions around and mark them as deprecated (i.e., have them emit a warning but still work), as an intermediate step to avoid breaking existing dscs.
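The deprecation shim described above could be a thin wrapper around the renamed function using base R's `.Deprecated()`. A minimal sketch (the function names `add_scenario`/`addScenario` and the `dsc` list structure are hypothetical illustrations, not the actual dscr API):

```r
# New snake_case function: holds the real implementation.
# Here it just appends a scenario to a dsc object's scenario list.
add_scenario <- function(dsc, scenario) {
  dsc$scenarios <- c(dsc$scenarios, list(scenario))
  dsc
}

# Old lower-camel-case name stays exported, but emits a deprecation
# warning and delegates, so existing dscs keep running during the
# transition.
addScenario <- function(...) {
  .Deprecated("add_scenario")
  add_scenario(...)
}
```

Existing code calling `addScenario()` would then see a warning pointing at `add_scenario` but still get the same result, which matches the "warn but still work" intermediate step.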
@stephens999 This is fully open to debate.