
Remove Fortran code #481

Merged: 18 commits merged into sherpa:master on Oct 5, 2018

Conversation

dtnguyen2
Contributor

@dtnguyen2 dtnguyen2 commented Aug 1, 2018

Release Note

Sherpa does not rely on any Fortran routines anymore, except those compiled in XSPEC. The existing Fortran code, which was used mainly in some of the optimization routines, has been replaced by equivalent C++ code. This means that gfortran is no longer required for building Sherpa, although a version of libgfortran compatible with the one used to link the XSPEC libraries would still be needed at runtime if the XSPEC extension is built.

Notes

  1. Had to change one Sherpa thread test (lev3fft.py): changed src.fwhm from 0.0441858 to 0.044178 to get it to pass.
  2. Enabled LevMar to return the covariance matrix and changed the output of fitted parameters to include errors.

@dtnguyen2
Contributor Author

LevMar does not handle parameter limits/bounds as well as it could, i.e. it does not transform the parameters. Having the code in C/C++ now should make the transition easier for whoever implements that.
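For context, one common way optimizers handle bounds is a sinusoidal reparameterization, as MINUIT-style codes do. A minimal sketch of that idea in Python, with hypothetical helper names (this is not Sherpa's code, just an illustration of the kind of transform meant):

```python
import math

def to_external(x_int, lo, hi):
    # Map an unbounded internal value onto [lo, hi] with a sinusoid;
    # the optimizer only ever sees the unbounded internal parameter,
    # so it never has to clip against the limits.
    return lo + 0.5 * (hi - lo) * (math.sin(x_int) + 1.0)

def to_internal(x_ext, lo, hi):
    # One-sided inverse (sin is periodic); enough to seed the optimizer
    # from a starting value inside the bounds.
    return math.asin(2.0 * (x_ext - lo) / (hi - lo) - 1.0)

lo, hi = -1.0, 3.0
# Any internal value lands inside the bounds...
assert lo <= to_external(42.0, lo, hi) <= hi
# ...and a bounded value survives the round trip through internal space.
assert abs(to_external(to_internal(2.5, lo, hi), lo, hi) - 2.5) < 1e-12
```

A transform like this is easy to bolt onto an unconstrained C/C++ minimizer, which is presumably the point of the comment above.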

@olaurino
Member

@dtnguyen2 all the tests are failing because of a residual statement. Is the PR missing a commit, or should the statement simply be removed?

@olaurino
Member

The code did not compile with certain compilers as the C++ methods were incorrectly called. I fixed that.

There were more residual usages of the fortran flags that prevented the build from happening, and I removed those too.

One potential issue with this PR is that it requires a new dependency, autograd, which does not have a conda package available. I added the dependency to the Python code, which should hopefully make the Travis tests work, but I don't know at this point how to package the conda binaries for release, as autograd cannot be included as a dependency. There is a package in conda-forge, but up until now I have tried to stay away from conda-forge, as it seems to do more harm than good.

@olaurino
Member

Unfortunately including autograd clashes with numpy 1.11, which we still use as a baseline version for some of the tests. I tried updating it to 1.12, let's see what happens.

@olaurino
Member

I am also bumping the minimum version of matplotlib we test against to 2, as 1.5 was incompatible with numpy 1.12. At this point I am just trying to get the tests to pass; we'll need to figure out what to do with the dependencies and the tests.

@dtnguyen2
Contributor Author

autograd is not needed (it was some left-over work that should have been removed from my autodiff experiments). I tried to push the changes but was unable to, I guess because Omar had pushed to the branch:

error: failed to push some refs to 'https://github.com/dtnguyen2/sherpa'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

@olaurino
Member

git fetch
git rebase @{u}
git push

@dtnguyen2
Contributor Author

(sherpa3) [dtn@devel12 sherpa]$ git fetch
remote: Counting objects: 21, done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 21 (delta 12), reused 20 (delta 11), pack-reused 0
Unpacking objects: 100% (21/21), done.
From https://github.com/dtnguyen2/sherpa
7aeef04..40f0522 no-optmethods-Fortran -> myfork/no-optmethods-Fortran
(sherpa3) [dtn@devel12 sherpa]$ git rebase @{u}
First, rewinding head to replay your work on top of it...
error: Your local changes to the following files would be overwritten by checkout:
setup.cfg
Please commit your changes or stash them before you switch branches.
Aborting
could not detach HEAD
(sherpa3) [dtn@devel12 sherpa]$

@olaurino
Member

git reset --hard
git rebase @{u}
git push

@dtnguyen2
Contributor Author

done

(sherpa3) [dtn@devel12 sherpa]$ git reset --hard
HEAD is now at e30c14bf Removed autograd dependency which is not used anyway
(sherpa3) [dtn@devel12 sherpa]$ git rebase @{u}
First, rewinding head to replay your work on top of it...
Applying: Removed autograd dependency which is not used anyway
(sherpa3) [dtn@devel12 sherpa]$ git push

(gnome-ssh-askpass:15783): Gtk-WARNING **: cannot open display:
error: unable to read askpass response from '/usr/libexec/openssh/gnome-ssh-askpass'
Username for 'https://github.com': dtnguyen2

(gnome-ssh-askpass:15792): Gtk-WARNING **: cannot open display:
error: unable to read askpass response from '/usr/libexec/openssh/gnome-ssh-askpass'
Password for 'https://dtnguyen2@github.com':
Counting objects: 5, done.
Delta compression using up to 20 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 468 bytes | 0 bytes/s, done.
Total 5 (delta 4), reused 0 (delta 0)
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
To https://github.com/dtnguyen2/sherpa
40f0522..af86d08 no-optmethods-Fortran -> no-optmethods-Fortran
(sherpa3) [dtn@devel12 sherpa]$

@olaurino
Member

All right. I'll revert the changes I made, cancel all the tests on travis, and push again. Fingers crossed.

@olaurino
Member

@dtnguyen2 two tests are failing on Linux and four on macOS. They are all tests requiring xspec.

See
https://travis-ci.org/sherpa/sherpa/jobs/416088554

and
https://travis-ci.org/sherpa/sherpa/jobs/416088556


class LevMar(OptMethod):
Contributor

Some of the docstring should be retained; at least the

"""Levenberg-Marquardt optimization method."""

part but ideally more.

@olaurino
Member

olaurino commented Oct 2, 2018

Rebasing to clean up the history and fix conflicts. The branch head was a5be52b.

@DougBurke
Contributor

DougBurke commented Oct 2, 2018

I did a fit with this branch and got the following screen output, where the +/- error term is new. I assume this is part of the change here. We need to document this: what do we say about these error estimates (how are they calculated and how reliable are they)?

In [25]: ui.fit()
Dataset               = 1
Method                = levmar
Statistic             = chi2gehrels
Initial fit statistic = 675034
Final fit statistic   = 6738.63 at function evaluation 9
Data points           = 3827
Degrees of freedom    = 3825
Probability [Q-value] = 5.63697e-165
Reduced statistic     = 1.76173
Change in statistic   = 668296
   mdl.c0         244.128      +/- 0.996649    
   mdl.c1         -0.00677439  +/- 0.000155806 

EDITED Aha: this functionality was always there, it's just that the previous LevMar routine didn't return the covariance matrix, so the numbers were never displayed before.
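To make the connection between the covariance matrix and the new "+/-" column concrete: the displayed error on each parameter is the square root of the corresponding diagonal element of the covariance matrix. A small sketch (the matrix values below are illustrative, chosen so the diagonal reproduces the numbers in the fit output above; they are not taken from the PR):

```python
import math

# Hypothetical 2x2 covariance matrix for a two-parameter fit (mdl.c0, mdl.c1).
covar = [[9.93309e-1, 1.2e-5],
         [1.2e-5,     2.42755e-8]]

# The "+/-" numbers are one-sigma error estimates: square roots of the
# covariance diagonal, ignoring parameter correlations.
errs = [math.sqrt(covar[i][i]) for i in range(2)]
print(errs)  # ~[0.996649, 0.000155806], matching the fit output above
```

The off-diagonal terms (the correlations) are not shown in the fit summary, which is one caveat worth mentioning when documenting these estimates.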

@dtnguyen2
Contributor Author

dtnguyen2 commented Oct 2, 2018 via email

The main focus is on adding the error estimates in the FitResults.format
output. Several other numerical values have been tweaked (change in
numerical precision and dictionary ordering), none of which have any
significant change to the results.
@DougBurke
Contributor

I've added two minor doc edits - one is to the FitResults class to point out the change from covarerr to covar and the other is to the "quick guide" in the Sphinx documentation to add the "new"

    parameter +/- covarerr

output we see from fitting with LevMar.

@@ -606,8 +891,10 @@ static PyMethodDef WrapperFcts[] = {
FCTSPEC(difevo, py_difevo),
FCTSPEC(nm_difevo, py_difevo_nm),
FCTSPEC(lm_difevo, py_difevo_lm),
FCTSPEC(cpp_lmder, py_lmder),
Contributor

Do we need cpp_lmder created/exported, as I can't see a place where it is being used?

@dtnguyen2
Contributor Author

dtnguyen2 commented Oct 2, 2018 via email

@dtnguyen2
Contributor Author

dtnguyen2 commented Oct 2, 2018 via email

@DougBurke
Contributor

Dan, I'm reviewing this PR as requested by Omar, so I'm meant to be looking at your new version.

@dtnguyen2
Contributor Author

dtnguyen2 commented Oct 2, 2018 via email

@DougBurke
Contributor

Ta - so it sounds like cpp_lmder is probably a useful thing to have around for future experiments (e.g. maybe providing an optimiser which can use analytical derivatives?).
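To illustrate what analytic derivatives buy you, here is a tiny Levenberg-Marquardt loop fitting y = a*exp(b*x) with a hand-written Jacobian instead of finite differences. Everything here (data, names, the Cramer's-rule solve) is illustrative; it is not the cpp_lmder interface, just the technique it would enable:

```python
import math

# Illustrative data for the model y = a * exp(b * x), true values a=2, b=-1.5.
xs = [i / 19.0 for i in range(20)]
ys = [2.0 * math.exp(-1.5 * x) for x in xs]

def residuals(a, b):
    return [a * math.exp(b * x) - y for x, y in zip(xs, ys)]

def jacobian(a, b):
    # Analytic (d r_i/da, d r_i/db) for each data point.
    return [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]

a, b, lam = 1.0, -1.0, 1e-3
for _ in range(100):
    r = residuals(a, b)
    J = jacobian(a, b)
    # Normal equations (J^T J + lam*I) dp = -J^T r, solved by Cramer's rule.
    jaa = sum(ja * ja for ja, _ in J) + lam
    jbb = sum(jb * jb for _, jb in J) + lam
    jab = sum(ja * jb for ja, jb in J)
    ga = -sum(ja * ri for (ja, _), ri in zip(J, r))
    gb = -sum(jb * ri for (_, jb), ri in zip(J, r))
    det = jaa * jbb - jab * jab
    da, db = (ga * jbb - gb * jab) / det, (gb * jaa - ga * jab) / det
    if sum(ri * ri for ri in residuals(a + da, b + db)) < sum(ri * ri for ri in r):
        a, b, lam = a + da, b + db, lam * 0.5   # accept step, relax damping
    else:
        lam *= 10.0                             # reject step, damp harder

print(round(a, 4), round(b, 4))  # converges to about 2.0 and -1.5
```

With an exported lmder-style entry point, the finite-difference Jacobian would simply be replaced by a user-supplied one like `jacobian` above.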

@DougBurke
Contributor

@olaurino I've had a look at the code and - modulo the fact that I've skipped over the C++ side of it completely - it looks fine to me (as of commit 03244c2 aka "increase tolerance for tests that failed with older xspec").

You can consider this an "official" sign off, assuming that the only remaining changes are "test wibbles" (or perhaps doc fixes). I think the fact that the best-fit results in the tests aren't changing here (at least within the tolerance) is as good a sign as my review. Let me know if there's something else in this you wanted a comment on.

@olaurino olaurino added this to the 4.10.1 milestone Oct 4, 2018
@olaurino
Member

olaurino commented Oct 4, 2018

I wouldn't mind an additional check on the README changes. I think I'll merge everything tomorrow morning and kick off the release process.

@olaurino olaurino changed the title Removed Fortran opt methods Remove Fortran code Oct 4, 2018
@DougBurke
Contributor

DougBurke commented Oct 4, 2018 via email

@@ -509,9 +504,6 @@ used, but the full path should be in your own copy of the file):
have been issues seen using the CIAO binaries on certain OS X
systems.

In all cases, the same version of `gfortran` should be used to build
Sherpa and XSPEC, in order to avoid possible incompatibilities.

If there are problems building, or using, the module, then the other
Contributor

We have this paragraph which talks about gfortran_xxx settings. This seems at odds with our brave, new, Fortran-Free world. I think I added this paragraph, but I don't know of a situation where we've had to change these settings (which is not a solid proof that they are not needed, I admit).

I think we should leave this paragraph in for now, but when we get to the XSPEC 12.10.0 PRs we can look at whether this paragraph still has any use.

Member

I'll clarify that part. Gfortran is not needed to build the XSPEC extension. Only libgfortran is, and it needs to be the same (or the same version) as the one used to build XSPEC.

Member

Maybe it would help to include a setup.cfg.xspec with some of the options preset to a common scenario, with placeholders for the actual paths the user will have to change.
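Such a template might look like the following. This is only a hypothetical sketch: of the option names shown, only `gfortran_lib_dirs` and `gfortran_libraries` are confirmed in this discussion; the section name and the other keys are placeholders the user would adapt to their installation.

```ini
# setup.cfg.xspec -- hypothetical template; copy the relevant options into
# setup.cfg and replace the placeholder paths with your own.
[xspec_config]
with-xspec = True
xspec_lib_dirs = /path/to/xspec/lib
gfortran_lib_dirs = /path/to/dir/containing/libgfortran
gfortran_libraries = gfortran
```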

@dtnguyen2
Contributor Author

A couple of comments about the tolerance. When I converted the minpack code (lmdif) from Fortran to C++ I only had to change one test, the Sherpa thread (lev3fft.py) test, where src.fwhm changed from 0.0441858 to 0.04417.
Note that the fit statistic remained unchanged for this thread, so I suspect the difference is due to the resolution of the data. The subsequent test failures, for which Omar had to lower the tolerance, were due to differences in the covariance errors (I had added the covariance errors from the Fortran code before switching to C++). So we are talking about errors on the errors: an insignificant amount.

If you look carefully at the LevMar.cc file, you will notice that I have run valgrind on the C++ code (outside of Sherpa). One can compile the test by typing the following (documented in the code itself):
g++ -ansi -pedantic -Wall -O3 -I. -I../../../include -I../.. -DtestLevMar.cc LevMar.cc -o levmar
The model functions are run on the tests published in the following paper (the same authors as the original lmdif code itself):

//
// J. MORE', B. GARBOW & K. HILLSTROM,
// "Algorithm 566: Fortran Subroutines for Testing Unconstrained
// Optimization Software.", ACM TOMS, VOL. 7, PAGES 14-41 AND 136-140, 1981
//

Whenever possible I generalized the test model functions from a fixed number of parameters to an arbitrary number of free parameters. For example, the Rosenbrock function in the aforementioned paper was written for npar=2; in my test version npar can be any even number, and so on.
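That generalization can be sketched as follows. This is my reading of the comment above (summing the classic two-parameter Rosenbrock over consecutive parameter pairs, as in the Moré/Garbow/Hillstrom extended Rosenbrock), not the exact code in the C++ test file:

```python
# Extended Rosenbrock: the npar=2 function summed over (x[i], x[i+1]) pairs,
# valid for any even npar. The global minimum is at x = (1, 1, ..., 1).
def rosenbrock(x):
    assert len(x) % 2 == 0, "npar must be even"
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(0, len(x), 2))

print(rosenbrock([1.0, 1.0]))   # 0.0 at the global minimum (npar=2)
print(rosenbrock([1.0] * 6))    # still 0.0 for npar=6
```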

I included the covariance matrix for the Levenberg-Marquardt algorithm because I intend (someday) to do the same for the NelderMead algorithm, since these matrices are useful for Lev3 work (get_draws needs a covariance matrix, and Sherpa's covar command is iffy at best, never mind covariance from int_unc). As I mentioned before, I have always wanted to transform the fitted parameters to handle parameter boundaries. In theory, I could do that in Fortran, but that would not be handy for my next job.

I have thoroughly tested the C++ code, and I am very confident that it will be fine.

@olaurino
Member

olaurino commented Oct 5, 2018

I share your concern on #503. However we are several weeks behind schedule and I need to start working on the actual release. We can still remove #503 if necessary. Or, if we reach a conclusion today, I can remove it right away. Either way, I'll start working on the release.

If there are problems building, or using, the module, then the other
options may need to be set - in particular the `gfortran_lib_dirs` and
`gfortran_libraries` settings.

It is important that the `gfortran`-related options above point to a
Member

@DougBurke does this seem OK?

Member

Actually, I am merging this. If we need to reword we can do it anyway.

olaurino added a commit that referenced this pull request Oct 5, 2018
@olaurino olaurino merged commit 806df16 into sherpa:master Oct 5, 2018
olaurino added a commit that referenced this pull request Oct 5, 2018
DougBurke added a commit to DougBurke/sherpa that referenced this pull request Oct 5, 2018
PR sherpa#481 means that we no longer create _minim/_minpack modules in
sherpa.optmethods, so they can be removed from the list of modules
that need to be mocked when building the Sphinx documentation.