Add CI builds #5
@justinalsing, just a suggestion based on personal experience: I wonder if it isn't worth breaking this issue into three? To be completed in this order.
Following the discussion on the main review thread about the JOSS requirements, I've added a new notebook, and I've added a section to the README pointing the user to this notebook as a set of tests/validation for the full set of implemented models. I've also added comments in the CONTRIBUTING file, guiding contributors to include a similar test/demo in this notebook for any new models they implement.

Regarding testing the MPI script: as I discussed on the main review thread, I included this mainly to make non-MPI-savvy users' lives easier, rather than as an integral/critical part of the package. But I've emphasized in the README that the MPI script can be tested by running it with certain parameters set such that it runs very short MCMC chains (and hence runs through very quickly). Look forward to hearing whether you think this is all OK! Best,
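For concreteness, a minimal sketch of that quick-test idea, assuming the pystan 2 API; `compiled_model` and `stan_data` are hypothetical placeholders for whatever the MPI script actually constructs, not dlmmc's real variable names:

```python
# Hedged sketch: run very short MCMC chains purely to exercise the code
# path end-to-end. `compiled_model` (a pystan.StanModel) and `stan_data`
# are hypothetical placeholders, not dlmmc's actual names.
def smoke_test(compiled_model, stan_data):
    fit = compiled_model.sampling(data=stan_data, iter=100, warmup=50, chains=1)
    return fit
```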
Thanks @justinalsing! I just started, and apparently there is a file missing:

```python
# Import required modules
import pystan
import matplotlib.pyplot as plt
import numpy as np
import time
import scipy.interpolate as interpolate
import netCDF4
import pickle
import scipy.stats as stats
from utils.utils import *
from utils.dlm import *
%matplotlib inline
```
@Chilipp ah, thanks! I forgot to add it - I've just added and pushed it, so it should be there now (sorry for that!)
Thank you @justinalsing. Unfortunately, it seems like I cannot run test 2 for dlm_vanilla_ar2 because of the following error. Can you confirm this error?
Here is some information about the working environment:

- `conda list`
- `conda info`
Hi @Chilipp - sorry about that (I forgot to add the updated Stan models on my last commit). I've added and pushed them, and tested end-to-end by re-cloning, re-compiling the models, and running through the dlm validation notebook. Apologies for not doing this check beforehand - as far as I am aware it is fully functional now. Just make sure you re-compile the models after you pull the latest. Thanks!
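A minimal sketch of that recompilation step, assuming the pystan 2 API; the `.stan` filename and pickle path are illustrative, not necessarily the repository's actual layout:

```python
# Hedged sketch of recompiling a Stan model after pulling; the filename
# is illustrative -- use the .stan files shipped with the repository.
import pickle
import pystan

model = pystan.StanModel(file='dlm_vanilla_ar2.stan')  # assumed filename
with open('dlm_vanilla_ar2.pkl', 'wb') as f:           # cache the compiled model
    pickle.dump(model, f)
```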
Hi @justinalsing, I'll likely get time one evening in the coming week to look at this. Do I understand correctly that you've done something like:

Correct?
@taqtiqa-mark great, thanks! Yes, that's correct.
Dear @justinalsing, sorry for being so slow on this. I ran through your tests and they run through. However, there was another issue with the filenames of your regressors in the notebooks. Please note that UNIX filesystems are case-sensitive, so please either change the filenames of your regressors or change them in the notebook. As an example: your file name is regressors/ENSO_MEI_1950_201802.txt, but in your notebook you use regressors/enso_mei_1950_201802.txt. The same happens in the dlm_tutorial notebook. But I have to say that I like your tests, and they are really useful for judging the quality of dlmmc.
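To illustrate the fix, a one-line sketch; the `np.loadtxt` call is an assumption about how the notebook reads the file, but the path is the one named above:

```python
import numpy as np

# UNIX filesystems are case-sensitive, so the path used in the notebook
# must match the on-disk name exactly (the reader function is assumed).
enso = np.loadtxt('regressors/ENSO_MEI_1950_201802.txt')
```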
Hi @Chilipp - not a problem at all, and thanks for the comment. I've now made the regressor filenames in the notebook examples consistent with the actual filenames (and pushed the changes). Thanks!
CI builds are not a blocking issue for me.
Hi @justinalsing. Strictly speaking, this issue is not yet solved (although it does not affect the acceptance of openjournals/joss-reviews#1157). But if you close it, I assume this means that you do not plan to add CI builds within the near future? |
Hi @Chilipp - that's right, I don't envisage adding CI builds in the near future (although it is on my to-do list).
hey @justinalsing! As part of the software review in openjournals/joss-reviews#1157, I created this issue to discuss the installation of automated tests using continuous-integration services (such as Travis). See my comment (openjournals/joss-reviews#1157 (comment)) and the one from @taqtiqa-mark (openjournals/joss-reviews#1157 (comment)).
In your README file you state that the tutorial serves as a test for your suite (see ccf6b83). Nevertheless, this is not an automated test, and (in terms of transparency) you should use one of the available CI services that run your tests on every commit (e.g. Travis CI). I know this is tedious, but all the open-source projects that you rely on went through this process, so why shouldn't yours as well?
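For illustration, here is what a hypothetical minimal `.travis.yml` could look like; the Python version, package list, and notebook filename are assumptions rather than the repository's actual setup:

```yaml
# Hypothetical minimal .travis.yml -- versions, packages, and the
# notebook name are assumptions to be adapted to the repository.
language: python
python:
  - "3.6"
install:
  - pip install pystan jupyter numpy scipy matplotlib netCDF4
script:
  # Execute the tutorial notebook as a smoke test on every commit
  - jupyter nbconvert --to notebook --execute dlm_tutorial.ipynb
```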
Minimal requirements for your automated tests are:

- Running `jupyter nbconvert --execute` on your notebooks

Additionally, citing openjournals/joss-reviews#1157 (comment):
> You should definitely think about tests to check the mathematical correctness of your models, since this is the central part of your code. Comparing to some reference R project, etc. would be ideal, of course. Alternatively, you could just use the trend of your tutorial dataset (here you know that this one is correct) and implement a unittest that runs your four models and compares the results to this reference trend. You are the expert on your software, so it's up to you which metric and which accuracy you expect of your model (but nevertheless, please justify your choice).

(A sketch along these lines appears at the end of this comment.)
- Parallel test suite that exercises the parallel calculations. You could, for example, run your `dlm_lat_alt_mpi_run.py` script in your test suite.

If you address this issue in a commit or pull request, please include the link to this issue (#5) in the commit/PR message. That way, it will be easier for us to follow the implementation.