
[REVIEW]: The Walrus: the fastest calculation of hafnians, Hermite polynomials and Gaussian boson sampling #1705

Closed

whedon opened this issue Sep 3, 2019 · 112 comments
@whedon whedon commented Sep 3, 2019

Submitting author: @nquesada (Nicolás Quesada)
Repository: https://github.com/xanaduAI/thewalrus
Version: v0.10.1
Editor: @katyhuff
Reviewers: @amitkumarj441, @poulson
Archive: 10.5281/zenodo.3585911

Status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67"><img src="https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67/status.svg)](https://joss.theoj.org/papers/0aa4dd795abc7ed56c3c23a314fd4c67)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@amitkumarj441 & @poulson, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @katyhuff know.

Please try to complete your review in the next two weeks.

Review checklist for @amitkumarj441

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@nquesada) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @poulson

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@nquesada) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
@whedon whedon commented Sep 3, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @PhilipVinc, @poulson it looks like you're currently assigned to review this paper 🎉.

⭐️ Important ⭐️

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:


  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications


For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
@whedon whedon commented Sep 3, 2019

Attempting PDF compilation. Reticulating splines etc...
@whedon whedon commented Sep 3, 2019

@nquesada nquesada commented Sep 18, 2019

@whedon generate pdf

@whedon whedon commented Sep 18, 2019

Attempting PDF compilation. Reticulating splines etc...
@whedon whedon commented Sep 18, 2019

@nquesada nquesada commented Sep 18, 2019

@whedon generate pdf

@whedon whedon commented Sep 18, 2019

Attempting PDF compilation. Reticulating splines etc...
@whedon whedon commented Sep 18, 2019

@poulson poulson commented Sep 22, 2019

I am back from overseas travel and am now working on the review. I was able to run the python tests without a hitch, but I filed issue XanaduAI/thewalrus#56 to suggest fixing the hardcoded eigen3 path in tests/Makefile.

@poulson poulson commented Sep 22, 2019

Submitted XanaduAI/thewalrus#57 regarding the default Makefile rule in examples/ leading to an error.

@poulson poulson commented Sep 22, 2019

I am very impressed by the library of algorithms for computing hafnians, but I am worried there is insufficient justification for the claim that your library is the "fastest", given the lack of benchmarks relative to pre-existing libraries.

For example, on the subject of computing permanents, you have the wonderful documentation here:
https://the-walrus.readthedocs.io/en/latest/permanent_tutorial.html
but the forecasts for computing the permanent of a 35 x 35 matrix seem to be just under an hour, whereas the timings at https://codegolf.stackexchange.com/questions/97060/calculate-the-permanent-as-quickly-as-possible?rq=1 seem to require just a few minutes.

Also, as noted, both Mathematica and Maple contain library routines for computing permanents. I recognize that these are both proprietary packages -- but I believe the 'fastest' claims demand at least some connection to whatever is closest to state-of-the-art.
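For context on what "state of the art" for permanents looks like algorithmically: exact permanents are usually computed with Ryser's inclusion-exclusion formula in O(2^n n) arithmetic rather than the n!-term definition. A minimal NumPy sketch (my own reference code, not part of The Walrus) is:

```python
from itertools import permutations
import numpy as np

def perm_ryser(A):
    """Permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2) here.
    perm(A) = (-1)^n * sum over non-empty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} a_ij."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):  # each bitmask encodes a column subset S
        cols = [j for j in range(n) if (mask >> j) & 1]
        row_sums = A[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(row_sums)
    return (-1) ** n * total

def perm_naive(A):
    """Permanent via the n!-term definition (cross-check only)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))
```

As a sanity check, the permanent of the 4 x 4 all-ones matrix is 4! = 24, and both routines agree on arbitrary matrices.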

As a sidenote, I extended examples/example.cpp to compute hafnians up to size 30 x 30 after increasing nmax from 10 to 15, and noticed that double-precision numbers overflow starting at m=15. Have you considered falling back to higher precision if overflow is detected?

@nquesada nquesada commented Sep 22, 2019

Thanks @poulson! We are working on fixing the Makefile and also on improving the permanent code. Regarding being "the fastest": I guess we really like the idea of a walrus being the fastest at something ;) . More to the point, we never claimed anything about permanents; they are in the library for historical reasons, are not mentioned in the scope of the library, and appear only in the paper submission to provide some historical context. We have done hafnian benchmarking (cf. the JEA paper in the bibliography) and have also looked at the codegolf implementations.

@poulson poulson commented Sep 22, 2019

Thank you for the fast response, @nquesada! I see what you mean about the claim being specific to hafnians, but you also point out that haf([0, W; W^T, 0]) = perm(W), so it is possible to at least make a qualitative connection between hafnian runtimes and best-in-class permanent runtimes.
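The identity mentioned above is easy to verify numerically with brute-force reference implementations (a sketch of my own, independent of The Walrus; `haf_naive` sums over perfect matchings and `perm_naive` over permutations):

```python
from itertools import permutations
import numpy as np

def haf_naive(A):
    """Hafnian by recursing over perfect matchings (reference quality only)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0  # odd-dimensional matrices have no perfect matchings
    total = 0.0
    for j in range(1, n):  # match vertex 0 with j, recurse on the remainder
        rest = [k for k in range(1, n) if k != j]
        total += A[0, j] * haf_naive(A[np.ix_(rest, rest)])
    return total

def perm_naive(W):
    """Permanent via the n!-term definition."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    return sum(np.prod([W[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

# Embed W in the off-diagonal blocks: haf([[0, W], [W.T, 0]]) == perm(W)
rng = np.random.default_rng(42)
W = rng.standard_normal((3, 3))
B = np.block([[np.zeros((3, 3)), W], [W.T, np.zeros((3, 3))]])
```

With the zero diagonal blocks, the only perfect matchings pair a row index with a column index, so each matching contributes exactly one permutation term of perm(W).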

But you are right that the codegolf timings at https://codegolf.stackexchange.com/questions/157049/calculate-the-hafnian-as-quickly-as-possible relative to https://the-walrus.readthedocs.io/en/latest/hafnian_tutorial.html and Fig. 5 of https://arxiv.org/pdf/1805.12498.pdf appear more relevant.

It seemed possible that I was misinterpreting the much faster claims of the codegolf, so I downloaded a copy of the C++ source from the "miles" submission and modified the datatype (TYPE) from int to double, and disabled multithreading by making S equal to 0. I then generated a random 52 x 52 matrix by forming a 52 x 52 matrix with entries independently drawn from the symmetric normal distribution, added it onto its transpose, then clipped entries to {-1, 0, 1} based upon mapping (-infinity, -1] to -1, (-1, 1) to 0, and [1, infinity) to 1. Running with a single core on my laptop took 67 seconds with Miles's code compiled with -O3.

As an aside, I originally tried for a 104 x 104 matrix, but Miles's code led to a segfault.

I then modified the examples/example.cpp driver in thewalrus by including #include <chrono> then appending the following into the main function, just before return 0;:

    {
      std::vector<double>  mat{
        // The 52^2 {-1, 0, 1} entries of the 52 x 52 matrix go here.
      };
      std::cout << "Starting 52 x 52 hafnian calculation." << std::endl;
      auto start = std::chrono::steady_clock::now();
      double hafval = libwalrus::loop_hafnian(mat);
      auto end = std::chrono::steady_clock::now();
      std::chrono::duration<double> elapsed_seconds = end - start;

      // print out the result
      std::cout << hafval << "(" << elapsed_seconds.count() << " seconds.)"
                << std::endl;
    }

I recompiled with g++ example.cpp -std=c++11 -O3 -Wall -I/usr/include -I../include -I/home/poulson/.local/eigen3 -fopenmp -march=native -c and the code has been running (with OMP_NUM_THREADS=1) for quite some time (more than 10 minutes). I can edit in the time when it completes.

Is it possible that the code from miles in the codegolf is incorrect? I am happy to share the random 52 x 52 {-1, 0, 1} matrix I generated and the results from his submission.

@poulson poulson commented Sep 22, 2019

It has been another 21 minutes and thewalrus is still running. I also read a bit more and Miles's code is an implementation of Algorithm 2 of https://arxiv.org/pdf/1107.4466.pdf, which you cite as the "Recursive Algorithm" from [5] in https://the-walrus.readthedocs.io/en/latest/algorithms.html.

However, I see that this algorithm is only defined for non-loop hafnians, and I am comparing it against the loop hafnian. I will redefine a random symmetric 52 x 52 matrix with zero diagonal and retest with Miles's implementation of the recursive algorithm against your thewalrus::hafnian, rather than thewalrus::loop_hafnian.

Please excuse the confusion!

@nquesada nquesada commented Sep 22, 2019

Hi @poulson . The code from @eklotek (aka "miles") is correct. The algorithm he implemented, which comes from a paper by A. Bjorklund, scales like O(n^5 2^{n/2}) where n is the size of the matrix. The code from https://arxiv.org/pdf/1805.12498.pdf scales like O(n^3 2^{n/2}) which is asymptotically faster, but in real life (not n \to \infty) the algorithm implemented by miles and derived by A. Bjorklund in 2012 is faster.
This algorithm, the one derived by A. Bjorklund in 2012 and referred to as the recursive algorithm in the documentation of The Walrus, is the default option for calculating hafnians. That being said, you should not compare against the codegolf one because, at least when called from Python, all the calculations are done in quad precision. As a matter of fact, the algorithm used by The Walrus is a quad-precision, OpenMP-parallelized version of the Bjorklund 2012 algorithm implemented by miles. This is all acknowledged in the source code: https://github.com/XanaduAI/thewalrus/blob/master/include/recursive_hafnian.hpp

Hope this clarifies the confusion!

@nquesada nquesada commented Sep 22, 2019

Just saw your latest message. The problem with loop hafnians is that as far as I know, and I asked A. Bjorklund about this, there is no generalization of the "recursive" algorithm to loop hafnians.

@poulson poulson commented Sep 22, 2019

I looked through the linked source code and you seem to mean long double when you say "quad precision"; but, according to the Wikipedia entry on long double, "With gcc on Linux, 80-bit extended precision is the default". This has been my experience in practice, and it seems that only older chips with hardware 128-bit arithmetic had compilers mapping long double to 128-bit IEEE.

I recompiled my modification of Miles's source code to use TYPE as long double and saw the runtime for the 52 x 52 matrix increase from ~1 minute 20 seconds to ~9 minutes 30 seconds, with the same answer produced in both cases.

But, to be fair, I am calling example/example.cpp with an std::vector<double> input and believe it is doing its work using double, not long double.

I have been testing the exact same matrix against thewalrus::hafnian (not thewalrus::loop_hafnian) and it seems to be taking much more time. I now realize that this is due to what you mentioned above, that the asymptotically faster Bjorklund 2012 algorithm is used by thewalrus::hafnian, and one must explicitly call thewalrus::hafnian_recursive.

When I modified the example driver to call thewalrus::hafnian_recursive, it took about 9 minutes, though this is on top of the loop_hafnian and hafnian routines still running, so take this with a grain of salt. Either way, this is noticeably slower than the Miles code run with TYPE equal to double (about a minute and 20 seconds).

One simple optimization it seems you could make -- please correct me if I am wrong -- is to change the signature of
https://github.com/XanaduAI/thewalrus/blob/master/include/recursive_hafnian.hpp#L44,

template <typename T>
inline T recursive_chunk(std::vector<T> b, int s, int w, std::vector<T> g, int n)

to pass b and g as const references rather than by value since you don't seem to modify either in the routine.

EDIT: Switching to const references knocked the hafnian_recursive time down from 8 minutes to 7.5 minutes and preserved the result.

@poulson poulson commented Sep 22, 2019

I think it would be fair to say that it is the responsibility of the unqualified interfaces (i.e., hafnian and loop_hafnian) to route to the best algorithms based upon the matrix dimensions. There is such a large difference in runtime that such a simple fix seems worthwhile.

EDIT: For context, both loop_hafnian and hafnian took about three hours on the 52 x 52 matrix: the former 11,188 seconds and the latter 10,473 seconds. hafnian_recursive took 480 seconds as-is, 450 seconds with the const reference change mentioned above, and the @eklotek implementation took about 80 seconds.

@poulson poulson commented Sep 23, 2019

I believe the rest of the time difference -- from @eklotek's 80 seconds up to the ~450 second "const reference" version of hafnian_recursive -- will largely be explained by you reordering the loops in the main computational kernel, which were originally:

        for (int u = 0; u < n; u++) {
            TYPE *d = e+u+1,
                  p = g[u], *x = b+(T(s)-1)*m;
            for (int v = 0; v < n-u; v++)
                d[v] += p*x[v];
        }

        for (int j = 1; j < s-2; j++)
            for (int k = 0; k < j; k++)
                for (int u = 0; u < n; u++) {
                    TYPE *d = c+(T(j)+k)*m+u+1,
                          p = b[(T(s-2)+j)*m+u], *x = b+(T(s-1)+k)*m,
                          q = b[(T(s-2)+k)*m+u], *y = b+(T(s-1)+j)*m;
                    for (int v = 0; v < n-u; v++)
                        d[v] += p*x[v] + q*y[v];
                }

so that the majority of array accesses are no longer unit stride, leading to large performance degradations due to decreased cache reuse. Said code segment seems to have been translated to

    for (u = 0; u < n; u++) {
        for (v = 0; v < n - u; v++) {
            e[u + v + 1] += g[u] * b[v];

            for (j = 1; j < s - 2; j++) {
                for (k = 0; k < j; k++) {
                    c[(n + 1) * (j * (j - 1) / 2 + k) + u + v + 1] +=
                        b[(n + 1) * ((j + 1) * (j + 2) / 2) + u]
                        * b[(n + 1) * ((k + 1) * (k + 2) / 2 + 1) + v]
                        + b[(n + 1) * (k + 1) * (k + 2) / 2 + u]
                        * b[(n + 1) * ((j + 1) * (j + 2) / 2 + 1) + v];
                }
            }
        }
    }

with the u loop now on the outside, and the k loop on the interior. Notice that the updates to c are now with stride n+1 instead of stride 1, and some of the reads of b are also now of highly non-uniform stride.

If you preserve the original data access patterns, I suspect you will see a substantial performance improvement.
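The effect being described, unit-stride versus strided memory access, is easy to observe with a toy NumPy experiment (illustrative only; absolute timings are machine-dependent):

```python
import timeit
import numpy as np

n = 2000
# C-contiguous layout: rows are contiguous in memory (unit stride),
# columns are separated by n elements (stride n).
A = np.random.default_rng(1).standard_normal((n, n))

def sum_rows(A):
    # Unit-stride inner traversal: each row slice is contiguous.
    return sum(float(A[i, :].sum()) for i in range(A.shape[0]))

def sum_cols(A):
    # Stride-n inner traversal: each column slice jumps n doubles per element,
    # so each cache line fetched contributes only one useful value.
    return sum(float(A[:, j].sum()) for j in range(A.shape[1]))

t_rows = timeit.timeit(lambda: sum_rows(A), number=3)
t_cols = timeit.timeit(lambda: sum_cols(A), number=3)
# Both traversals compute the same total; on typical hardware the
# unit-stride version is noticeably faster due to cache reuse.
```

The same reasoning applies to the C++ kernels above: transposing the data or restoring the original loop order restores unit-stride access in the innermost loop.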

@eklotek eklotek commented Sep 23, 2019

I'm glad to see that code is still of some interest. I'll try to address a few of the concerns. This is all from memory so some parts may be a bit fudged.

The segfault you experienced is most likely due to how I used stack space to store the submatrices that were visited recursively. Switching to malloc there should avoid that problem, but will be a bit slower, given the ~2^n malloc/free calls on small matrices at the leaves.

The large performance speedup that algorithm 2 demonstrates is from reusing the previously "squeezed" submatrices instead of only applying the squeeze at the end (leaves of the binary tree). Since the values of n are small (even n = 50 for a 100x100 matrix would take over a month on a standard desktop) most speedups aren't realized due to large constant overhead from the setup/initialization/convergence. Using FFT/Karatsuba (in my attempts) doesn't improve convolution speed over the simple discrete pattern (which is also an easy target for vectorization).

The convolution pattern was used after noticing that vectorization wasn't happening well, since the data visited had a non-simple access pattern from the four nested loops. Their version hoists the two innermost loops out to the top, which hurts because the matrix is no longer accessed with stride 1. Transposing the matrix could be beneficial.

Thanks for adopting the code even though I never made a proper commit, and thanks for investigating its runtime in more depth. It was originally only intended for integer use, but the TYPE define was for experimenting with floats for FMA vectorization.

@nquesada nquesada commented Sep 27, 2019

Hey @poulson: I looked into the problem of apparent "overflows" in examples/example.cpp when you use all-ones matrices of size ~30. It turns out it is not an overflow; it is a problem with diagonalizing the very special matrices you get when you pass an all-ones matrix. In this case the matrices that pow_trace has to diagonalize are also all-ones matrices, and for some reason Eigen does not get the right answer. We observed the same problem back in the day when we called LAPACKE from C. To test this I created a matrix with all ones outside the diagonal and some other values on the diagonal. Since the diagonal does not matter, you should get the same result, (n-1)!! for an n x n matrix, and indeed in this case you do get the right result. Here is the code:

#include <complex>
#include <iostream>
#include <vector>
#include <libwalrus.hpp>

int main() {
  int m = 14;
  // create a 2m x 2m all-ones matrix
  int n = 2 * m;
  std::vector<std::complex<double>> mat(n * n, 1.0);

  // perturb the diagonal (set back to 1.0 to recover the bug!)
  for (int i = 0; i < n; i++) {
    mat[n * i + i] = 1.0 / (1.0 + i);
  }

  // calculate the hafnian and print out the result
  std::complex<double> hafval = libwalrus::hafnian(mat);
  std::cout << n << " " << hafval << std::endl;

  return 0;
}

There are at least two ways to work around this. One is to change the signatures of all the functions to pass out a success/fail flag from the diagonalization routines. Another is to internally perturb the diagonal of any input matrix so that its entries are not all equal.
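The two facts relied on here, that the hafnian never reads the diagonal and that an all-ones 2m x 2m matrix has hafnian (2m-1)!!, can be checked with a brute-force reference implementation (my own sketch, independent of libwalrus):

```python
import numpy as np

def haf_naive(A):
    """Hafnian via recursion over perfect matchings. Only off-diagonal
    entries A[0, j] (j != 0) are ever read at each level, so the
    diagonal cannot affect the result."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):  # match vertex 0 with j, recurse on the rest
        rest = [k for k in range(1, n) if k != j]
        total += A[0, j] * haf_naive(A[np.ix_(rest, rest)])
    return total

n = 8  # i.e. 2m with m = 4
ones = np.ones((n, n))
perturbed = ones.copy()
# same trick as the C++ snippet above: 1/(1+i) on the diagonal
np.fill_diagonal(perturbed, 1.0 / (1.0 + np.arange(n)))

expected = 7 * 5 * 3 * 1  # (n-1)!! = 105 perfect matchings of 8 vertices
```

Both the all-ones matrix and the diagonally perturbed one give exactly (n-1)!!, confirming that any correct backend must be insensitive to the diagonal (for the plain hafnian, as opposed to the loop hafnian).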

@poulson poulson commented Sep 28, 2019

@nquesada I would be very surprised if the same bug appeared in LAPACK's Aggressive Early Deflation Hessenberg QR as in Eigen's adaptation of EISPACK via JAMA.

It is worth noting that -- somewhat ironically -- Eigen's generic eigensolver is grossly suboptimal for even moderate sized matrices. But maybe this is irrelevant for you given the small matrices.

But, at this point, my main concern is that your library is currently not justified in its claim to be the "fastest" calculation of "hafnians, Hermite polynomials and Gaussian boson sampling", nor the GitHub repo's claim as being "The fastest exact library for hafnians, torontonians, and permanents for real and complex matrices."

I recommend modifying the recursive hafnian formulation to use unit stride access in the manner originally used by @eklotek (or using its transpose) and making this algorithm the default when one calls thewalrus::hafnian, and modifying the GitHub repo's title to only claim the fastest hafnian computation.

@katyhuff katyhuff commented Sep 28, 2019

@PhilipVinc have you been able to start your review process?

@danielskatz danielskatz commented Dec 19, 2019

I'm also going to proof-read the paper shortly

@nquesada nquesada commented Dec 19, 2019

Thanks so much @danielskatz !

@danielskatz danielskatz commented Dec 19, 2019

The paper looks good to me, with the possible exception of some capitalization in the references. For example, should "Speedup in classical simulation of gaussian boson sampling" be "Speedup in classical simulation of Gaussian boson sampling"? And should "Franck-condon factors via compressive sensing" be "Franck-Condon factors via compressive sensing"? Please check capitalization in the references, and use {}s to protect it from case conversion.

@nquesada nquesada commented Dec 19, 2019

For the Zenodo submission, is there a quick tutorial on how to do that?

@danielskatz danielskatz commented Dec 19, 2019

For the Zenodo submission, is there a quick tutorial on how to do that?

See https://guides.github.com/activities/citable-code/

@danielskatz danielskatz commented Dec 19, 2019

But you can also tar the repo and deposit it in Zenodo, figshare, or an institutional repository manually

@josh146 josh146 commented Dec 19, 2019

Hi @danielskatz, we've corrected the bibliography per your suggestion, and uploaded to Zenodo (https://zenodo.org/record/3585911#.Xfvb09lyZhE). The DOI is 10.5281/zenodo.3585911. Thanks!

@danielskatz danielskatz commented Dec 19, 2019

Is the version v0.10.0 ?

@danielskatz danielskatz commented Dec 19, 2019

@whedon set 10.5281/zenodo.3585911 as archive

@whedon whedon commented Dec 19, 2019

OK. 10.5281/zenodo.3585911 is the archive.

@danielskatz danielskatz commented Dec 19, 2019

@whedon generate pdf

@whedon whedon commented Dec 19, 2019

Attempting PDF compilation. Reticulating splines etc...
@whedon whedon commented Dec 19, 2019

@nquesada nquesada commented Dec 19, 2019

It is 0.10.1

@danielskatz danielskatz commented Dec 19, 2019

@whedon set v0.10.1 as version

@whedon whedon commented Dec 19, 2019

OK. v0.10.1 is the version.

@danielskatz danielskatz commented Dec 19, 2019

@whedon accept

@whedon whedon commented Dec 19, 2019

Attempting dry run of processing paper acceptance...
@whedon whedon commented Dec 19, 2019


OK DOIs

- 10.1007/BF02781659 is OK
- 10.1007/978-3-319-51829-9 is OK
- 10.1016/0304-3975(79)90044-6 is OK
- 10.1017/S0013091500011299 is OK
- 10.1137/1.9781611973099.73 is OK
- 10.1103/PhysRevA.100.032326 is OK
- 10.1063/1.5086387 is OK
- 10.1103/PhysRevA.100.022341 is OK
- 10.1145/3325111 is OK
- 10.1103/PhysRevLett.119.170501 is OK
- 10.1103/PhysRevA.98.062322 is OK
- 10.1103/PhysRevA.50.813 is OK
- 10.1016/j.jmva.2007.01.013 is OK
- 10.1002/(SICI)1098-2418(1999010)14:1<29::AID-RSA2>3.0.CO;2-X is OK
- 10.22331/q-2019-03-11-129 is OK
- 10.1088/0305-4470/34/31/312 is OK

MISSING DOIs

- None

INVALID DOIs

- None
@whedon whedon commented Dec 19, 2019

Check final proof 👉 openjournals/joss-papers#1189

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#1189, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.

@whedon accept deposit=true

@whedon accept deposit=true

@whedon whedon added the accepted label Dec 19, 2019
@whedon whedon commented Dec 19, 2019

Doing it live! Attempting automated processing of paper acceptance...
@danielskatz danielskatz commented Dec 19, 2019

Thanks to @amitkumarj441 & @poulson for reviewing and to @katyhuff for editing!

@whedon whedon commented Dec 19, 2019

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@whedon whedon commented Dec 19, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 openjournals/joss-papers#1190
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01705
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? notify your editorial technical team...

@danielskatz danielskatz commented Dec 19, 2019

Congratulations to @nquesada and co-authors!

@whedon whedon commented Dec 19, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01705/status.svg)](https://doi.org/10.21105/joss.01705)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01705">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01705/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01705/status.svg
   :target: https://doi.org/10.21105/joss.01705


We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@nquesada nquesada commented Dec 19, 2019

Thanks @poulson @amitkumarj441 @katyhuff @danielskatz for your refereeing/editorial work!
