PySCeS releases #51
Refer to my comments on the various issues. I'll be able to help as indicated. We need some general consensus on the way forward for some of this. #53 is ready to be merged if you agree with my final comments. More generally: I am really keen to get a 1.0 out, and I think the program is at a stage where this is warranted. My concern if we do 0.9.9 now (and this is judging from past experience for both of us) is that it will be at least another six months to 1.0.
In principle I don't really mind if we make this release 1.0 and the next one 1.1. I just won't have time to do the current 1.0 milestones (but if it's just renaming the milestones, that's okay).
@jmrohwer renamed milestones 1.0 and 1.1 and set the 1.0 release date for next Friday :-)
By the way, what was the post2 release that appeared on PyPI last week? There is no source bundle or corresponding GitHub release.
I posted some wheels for Linux and Windows which we urgently needed for the students in my lab. Basically this bug fix: fe69b0d (when running a simulation with …).
That's okay, just always upload a source bundle, otherwise the binary and source installs get out of sync. Also, it's easy to create a non-production (pre-release) release on GitHub, then everyone knows what is going on :-)
Okay, will keep this in mind 😃
For any future releases, I propose the following general sequence of events.
I've also created a wiki page as a reminder note: https://github.com/PySCeS/pysces/wiki/PySCeS-releases
👍 on the sequence of events. I suppose this means that after 1.0 is out, development will occur on a separate "development" branch. We might have to find a way to manage bugfix releases. I am happy with the x.y.z numbering scheme you propose. The question is how often these bugfix releases should happen. Put differently, if we find a bug that impacts the students in my lab, I would need to push out a fix ASAP. Not that it happens all that often, but it happened recently.
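As a side note, the x.y.z scheme discussed here can be bumped mechanically for bugfix releases; a small illustrative helper (my own sketch, not project tooling):

```python
def bump_patch(version):
    """Increment the z component of an x.y.z version string for a bugfix release."""
    major, minor, patch = (int(part) for part in version.split('.'))
    return f"{major}.{minor}.{patch + 1}"

print(bump_patch('1.0.0'))   # 1.0.1
print(bump_patch('1.0.9'))   # 1.0.10
```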
Issues to be sorted out before the 1.0 release. @bgoli please comment, so we are on the same page.
That's about it, feel free to add.
Re 2.: I was going to play around with #62 and do some local testing; if it works, we might as well do it for 1.0. I guess it would just have to be documented in README.md. Also, I'm not sure how Anaconda deals with optional dependencies.
The idea behind the development branch is that it becomes the default branch, so that all development is forked from there. It should therefore always be in a state to merge back into main and create a release on the main branch (which is in sync with your platform releases). A "bugfix" release would then mean creating a point release and building the binaries; unfortunately there is no other way to keep all platforms in sync. There are ways to simplify the build process, but using a development branch is just one way of working; we can use a different flow.
Maybe investigate continuous integration and automatic builds? I've seen a lot of this around but have never messed with it myself.
Precisely, I do remember an issue for 1.1 :-) (It's some work to set up, but once it is going it works like a charm!)
👍 on #60 for 1.1
What's your take on distributing a PDF of the documentation together with the release packages (both source and binary)? I'm all set up to do it.
Feel free to look into #62. Here is my attempt from another project:

```python
try:
    from setuptools import setup
    install_requires_src = ['numpy', 'packaging', 'nose']
    extras_requires_src = {
        'sympy': ['sympy'],
        'glpk': ['swiglpk'],
        'sbml': ['python_libsbml', 'lxml'],
        'all': ['sympy', 'swiglpk', 'python_libsbml', 'lxml'],
    }
except ImportError:
    # Fall back to distutils, which has no extras support
    from distutils.core import setup
    install_requires_src = []
    extras_requires_src = {}
```

My question is: if you do a setup with no extras argument, does it install 'all' or none? As far as I can tell the latter, but it would be good to confirm.
I would not include the source, but PDFs can be dumped into pysces/docs and should end up in the released packages.
Yes, that was my idea.
I would also say none, but I will test and confirm.
In that case, make assimulo, libsbml and ipyparallel the individual options, with "all" containing all of them 👍 Leave the current required_options (currently defined in the setup.py arguments) as install_requires_src. Of course you also need:

```python
setup(
    ...
    install_requires=install_requires_src,
    extras_require=extras_requires_src,  # note: the setuptools keyword is extras_require
    ...
)
```
I've removed assimulo from the requirements in the meantime. It also causes the RTD build to fail, as RTD needs to install the modules in requirements.txt for the documentation build.
I tried this (just with the parscan option) and it seems to work. @bgoli would appreciate some testing 😄 Create a new clean virtual environment, say `test`, and run:

```
(test)$ pip install numpy
(test)$ pip install --no-cache-dir -i https://test.pypi.org/simple/ --extra-index-url=https://pypi.org/simple/ 'pysces[parscan]'
```

(The initial `pip install numpy` is needed because otherwise the source build bombs out, as there is no numpy on test.pypi; this happens even though --extra-index-url is specified.) The other packages are then found on pypi.org. I have only uploaded a source bundle to test.pypi, but if it works (it does on my machine), then a binary bundle will work too. Then, in the shell in the virtual environment:

```
(test)$ ipcluster start
```

Open up another shell, activate the same virtual environment (needed to access the ipcluster), start ipython, and run in turn:

```
In [1]: import os
In [2]: import pysces
In [3]: pysces.PyscesModel.MODEL_DIR = os.path.join(pysces.__path__[0], 'pscmodels')
In [4]: testscript = os.path.join(pysces.__path__[0], 'examples', 'testparscanner.py')
In [5]: exec(open(testscript).read())
```

Let me know if it works 🤞
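The session above locates the bundled models and example script via the package's `__path__`; the same pattern works for any installed package. A minimal sketch (the helper name is my own, and it is illustrated with a stdlib package since pysces may not be installed everywhere):

```python
import os

def package_file(pkg, *relpath):
    """Resolve a file shipped inside an installed package.

    Builds the path from the package's first __path__ entry, mirroring
    how the ipython session above finds pscmodels/ and examples/.
    """
    module = __import__(pkg)
    path = os.path.join(module.__path__[0], *relpath)
    if not os.path.isfile(path):
        raise FileNotFoundError(path)
    return path

# Example with a stdlib package rather than pysces itself:
print(package_file('email', 'parser.py'))
```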
What I can confirm is that if …
Nice work 👍 I can see how this affects a source build, but what about a binary release? One test would be to build a single binary (say Windows Py3.8), upload it to test.pypi, and see if it makes any difference when doing a no-extra-argument pip install vs an extra-argument pip install into an empty environment (no compilers). If you upload the binary I will be happy to test it (I'll try with a local install in the meantime).
You may need to push your changes?
I was referring more to the testing part on your side. I'll push once I know it's working. I've been sorting out build issues here with manylinux. It's getting late now; I will build the binaries tomorrow and upload to test.pypi. Did you try the pip install from source, or don't you have compilers set up in a separate virtualenv?
@bgoli Okay, binary wheels for Python 3.8 and 3.9 for win_amd64, and for Python 3.9 manylinux2014, are on test.pypi. Would appreciate it if you could test as per this post: #51 (comment). To be absolutely sure, you may want to modify the pip install command to make sure it only gets binaries: …
You should be able to install … EDIT: It appears that under Windows the single quotes don't work, but you can leave them out entirely and just do … Would appreciate it if you could also verify that the parallel scanning is working as per the above post.
Thanks. It works in quite a nice way: for wheels it creates custom entries in the package metadata, so the binary will be selected first, if available, and the dependencies are then checked locally:

```
Requires-Dist: numpy
Requires-Dist: scipy
Requires-Dist: matplotlib
Requires-Dist: nose
Provides-Extra: all
Requires-Dist: ipyparallel ; extra == 'all'
Requires-Dist: assimulo ; extra == 'all'
Requires-Dist: python-libsbml ; extra == 'all'
Provides-Extra: cvode
Requires-Dist: assimulo ; extra == 'cvode'
Provides-Extra: parscan
Requires-Dist: ipyparallel ; extra == 'parscan'
Provides-Extra: sbml
Requires-Dist: python-libsbml ; extra == 'sbml'
```

Both of these work as expected when installed into an environment that only has python, pip and ipython:

```
(tpip38c) pip install -i https://test.pypi.org/simple/ --extra-index-url=https://pypi.org/simple/ pysces
(tpip38d) pip install -i https://test.pypi.org/simple/ --extra-index-url=https://pypi.org/simple/ pysces[parscan]
```

So that all seems to work; I will try to check the parallel parameter scan at some point. One suggestion: what about letting "cvode" also pull in libSBML? It is available for more platforms than assimulo, and the chances are that if you want to deal with events, you have an SBML file.
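To see which of the `Requires-Dist` entries above are unconditional and which are gated behind an extra, the lines can be split on their `extra == '<name>'` environment markers; a small illustrative parser (my own sketch, not part of PySCeS or pip):

```python
def split_requirements(requires_dist):
    """Split Requires-Dist entries into core deps and per-extra deps.

    Entries carrying an `extra == '<name>'` marker are only installed
    when that extra is requested (e.g. pip install pysces[parscan]);
    everything else is always installed.
    """
    core, extras = [], {}
    for line in requires_dist:
        if ';' in line:
            req, marker = (part.strip() for part in line.split(';', 1))
            extra = marker.split('==')[1].strip().strip("'\"")
            extras.setdefault(extra, []).append(req)
        else:
            core.append(line.strip())
    return core, extras

core, extras = split_requirements([
    "numpy",
    "scipy",
    "ipyparallel ; extra == 'parscan'",
    "python-libsbml ; extra == 'sbml'",
])
print(core)    # ['numpy', 'scipy']
print(extras)  # {'parscan': ['ipyparallel'], 'sbml': ['python-libsbml']}
```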
Okay thanks, will push changes. One further question: what about Anaconda packages? Similar functionality is not (yet?) available for conda, see here: … However, conda dependency resolution is governed by the build recipe, not the setup.py, so we could include the "optional" dependencies as full dependencies for a conda build. This is not a problem because all the packages are available for all platforms on conda-forge. If you are worried about the size of the install, the addition of ipyparallel is really minimal, since most users would have IPython already in any case. Libsbml is a 4-5 MB download, and assimulo anything between 2 and 6 MB depending on the OS. The upside would be that CVODE would be automatically enabled on a conda install. Thoughts? Of course the docs would have to be updated, but I can take care of that once we agree. I actually like this idea, but the alternative is of course also possible, i.e. users having to do …
I'd prefer to have it separate; it's cleaner/neater, as long as it is properly documented. Any user doing this will be at a level where they have a reasonable idea of what they are doing... You may want CVODE (events) support without SBML; I had an MSc student doing events models, but building them from scratch without SBML.
For Anaconda I assume everything is on conda-forge, so I would say just add everything; size is not the issue, but rather increasing the complexity of the dependencies. This is mostly relevant for automated builds/tests (but that is done with PyPI anyway). Wouldn't your student be using Anaconda and getting everything anyway? :-) I'm fine with things as they are for now. (EDIT: of course you could also do …)
@jmrohwer Seems to work ... will try with a slightly larger scan :-)

```
GATHER RESULT
-------------
Parallel calculation completed: 0 min 6 sec
PARSCANNER: 1056 states analysed
Total time taken: 0 min 6 sec
Duration: 6.19 seconds
States per second: 170.5
Speedup with load balanced TaskClient: 0.40

Parallel execution...using RunScatter
parallel engine: ipcluster
MaxMode 2
PREPARATION
-----------
Generated ScanSpace: 0 min 0 sec
PARSCANNER: Tsteps 1056
Scattered ScanSpace on number of engines: 16
Preparation completed: 0 min 0 sec
PARALLEL COMPUTATION
--------------------
Parallel calculation completed: 0 min 1 sec
GATHER RESULT
-------------
Parallel calculation completed: 0 min 1 sec
PARSCANNER: 1056 states analysed
Total time taken: 0 min 1 sec
Duration: 1.78 seconds
States per second: 592.8
Speedup with RunScatter: 1.38
===========
Comparing results...
serial vs. parallel s/s results : True
serial vs. parallel user output : True
TaskClient vs RunScatter s/s results: True
TaskClient vs RunScatter user output: True
```
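The summary figures in these logs follow directly from the counts and durations; a quick recomputation for the RunScatter run above (small differences from the printed values are expected, since the scanner times its phases internally):

```python
# Figures taken from the RunScatter log above.
states = 1056            # PARSCANNER: 1056 states analysed
duration = 1.78          # Duration: 1.78 seconds
speedup = 1.38           # Speedup with RunScatter: 1.38

states_per_second = states / duration
implied_serial_duration = duration * speedup   # time the serial reference run took

print(round(states_per_second, 1))        # ~593, close to the printed 592.8
print(round(implied_serial_duration, 2))  # ~2.46 s
```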
👍
I have done a Linux build and tested it; all works as expected 👍
Cool. It's probably in the docs, but while I'm here: is there an easy way to set the number of workers that are created?
Not for the multiprocessing option; it just uses the multiprocessing defaults and creates a Pool equal to the number of CPU cores on your machine. For ipcluster, you set the number of nodes when creating the cluster:

```
ipcluster start --n=4
ipcluster --help
```
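For the multiprocessing route, the default Pool size described above comes from the CPU count; a minimal sketch of how to cap the number of workers (the scan function is a generic stand-in, not the PySCeS API):

```python
import multiprocessing as mp

def evaluate_point(p):
    # Stand-in for evaluating the model at one parameter value.
    return p * p

def run_scan(points, processes=None):
    """Map a parameter scan over a worker pool.

    processes=None reproduces the default behaviour (one worker per
    CPU core, via os.cpu_count()); pass an integer to limit workers.
    """
    with mp.Pool(processes=processes) as pool:
        return pool.map(evaluate_point, points)

if __name__ == '__main__':
    print(run_scan(range(5), processes=2))   # [0, 1, 4, 9, 16]
```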
Windows build is also good. It pulls in everything with the dependencies as specified in the build recipe, with setup.py left unchanged. I will thus now update the docs. I really like that the Anaconda packages set up a full working environment with a single one-line command 💯
Great 👍 To end with, here are some more representative speedup results:

```
GATHER RESULT
-------------
Parallel calculation completed: 0 min 22 sec
PARSCANNER: 52800 states analysed
Total time taken: 0 min 22 sec
Duration: 22.53 seconds
States per second: 2343.3
Speedup with load balanced TaskClient: 5.48

Parallel execution...using RunScatter
parallel engine: ipcluster
MaxMode 2
PREPARATION
-----------
Generated ScanSpace: 0 min 0 sec
PARSCANNER: Tsteps 52800
Scattered ScanSpace on number of engines: 16
Preparation completed: 0 min 0 sec
PARALLEL COMPUTATION
--------------------
Parallel calculation completed: 0 min 17 sec
GATHER RESULT
-------------
Parallel calculation completed: 0 min 18 sec
PARSCANNER: 52800 states analysed
Total time taken: 0 min 18 sec
Duration: 18.54 seconds
States per second: 2848.5
Speedup with RunScatter: 6.66
===========
Comparing results...
serial vs. parallel s/s results : True
serial vs. parallel user output : True
TaskClient vs RunScatter s/s results: True
TaskClient vs RunScatter user output: True
```
@bgoli How is this coming along in terms of a release? Let me know if I can help with testing in any way. In terms of working on release-related issues, my time next week is quite limited (I basically only have Monday morning open), as I'm attending an online conference. I can start drafting some 1.0 release notes. What are the most important features you would want to highlight? Obviously CVODE/Assimulo comes to mind, but a non-exhaustive list might be: …
Anything else?
The release notes look good. There is one last issue I want to fix with the SBML export, and then I will merge my development branch to main. I picked up a few issues with the stoichiometric analysis that needed attention, but that will be tested in your CI builds, and the SBML import/export can be tested as we go along. I hope to merge this weekend, as next week is pretty full.
I've assigned a bunch of issues that I think could be addressed for a 0.9.9 release this week (https://github.com/PySCeS/pysces/milestone/1).
Feel free to add or assign yourself issues; whatever can't be done now will be moved to the 1.0 release, as will any new issues (SBML3 support, CI integration, etc.).