
[REVIEW]: Atramhasis: A webbased SKOS editor #5040

Closed
editorialbot opened this issue Jan 5, 2023 · 91 comments
Assignees
Labels
accepted, Mako, published (Papers published in JOSS), Python, recommend-accept (Papers recommended for acceptance in JOSS), review, Ruby, Track: 7 (CSISM) Computer science, Information Science, and Mathematics

Comments

@editorialbot
Collaborator

editorialbot commented Jan 5, 2023

Submitting author: @koenedaele (Koen Van Daele)
Repository: https://github.com/OnroerendErfgoed/atramhasis
Branch with paper.md (empty if default branch): joss_paper
Version: 1.3.2
Editor: @danielskatz
Reviewers: @gaurav, @SvenLieber
Archive: 10.5281/zenodo.7733994

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/f7013d5847c4ed748cf5dd42a761a830"><img src="https://joss.theoj.org/papers/f7013d5847c4ed748cf5dd42a761a830/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/f7013d5847c4ed748cf5dd42a761a830/status.svg)](https://joss.theoj.org/papers/f7013d5847c4ed748cf5dd42a761a830)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@gaurav & @SvenLieber, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @danielskatz know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @gaurav

📝 Checklist for @SvenLieber

@editorialbot editorialbot added the Mako, Python, review, Ruby, and Track: 7 (CSISM) Computer science, Information Science, and Mathematics labels on Jan 5, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.99 s (222.7 files/s, 118929.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          75           1450            933          64456
CSS                              5           2916           5158          21684
JavaScript                      36            561            303           5277
Sass                            16            942           2462           1456
Jinja Template                  22            160             10           1344
JSON                             6              0              0           1153
reStructuredText                14            896           1516           1145
YAML                             4             35              4           1074
HTML                            23             37              3            779
XML                              3              0              0            600
INI                              6             88              1            327
PO File                          3            130             94            327
DOS Batch                        1             29              1            212
make                             1             28              6            143
Markdown                         2             21              0            136
TeX                              1              6              0             69
Mako                             1              7              0             15
Ruby                             1              6             12              9
Bourne Shell                     1              0              0              1
-------------------------------------------------------------------------------
SUM:                           221           7312          10503         100207
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 860

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5194/isprs-annals-IV-2-W2-151-2017 is OK
- 10.1016/j.websem.2016.03.003 is OK
- 10.5281/zenodo.6984378 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@danielskatz

@gaurav & @SvenLieber - Thanks for agreeing to review this submission.
This is the review thread for the paper. All of our communications will happen here from now on.

As you can see above, you each should use the command @editorialbot generate my checklist to create your review checklist. @editorialbot commands need to be the first thing in a new comment.

As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#5040 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if either of you require some more time. We can also use editorialbot (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@danielskatz) if you have any questions/concerns.

@gaurav

gaurav commented Jan 5, 2023

Review checklist for @gaurav

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/OnroerendErfgoed/atramhasis?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@koenedaele) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@gaurav

gaurav commented Jan 17, 2023

Hi everybody,

Thank you so much for the opportunity to review this manuscript! I got stuck pretty early on in the review process because the Atramhasis repository doesn't have installation instructions. I tried following the installation instructions on your readthedocs.io page, but running initialize_atramhasis_db development.ini resulted in the following error:

raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (MarkupSafe 2.0.1 (/Users/gaurav/Development/open-source/atramhasis/venv/lib/python3.9/site-packages), Requirement.parse('MarkupSafe>=2.1.1'), {'werkzeug'})

Could you please copy the basic instructions for starting Atramhasis into the repository's README file? I'll keep trying to get Atramhasis to run, but if you have any insights into why running that command resulted in a version conflict, please let me know!
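
In case it helps with debugging, here is a small, generic Python sketch (not Atramhasis-specific; the package names are simply taken from the error above) for comparing the MarkupSafe version installed in the virtualenv with the minimum that werkzeug declares:

# Generic diagnostic sketch: compare the installed MarkupSafe version with
# the constraint declared by werkzeug (names taken from the error above).
from importlib.metadata import requires, version

installed = version("MarkupSafe")
constraints = [r for r in requires("werkzeug") if r.lower().startswith("markupsafe")]

print("MarkupSafe installed:", installed)    # e.g. 2.0.1
print("werkzeug declares:", constraints)     # e.g. ['MarkupSafe>=2.1.1']
# If the installed version is older than the declared minimum, upgrading
# MarkupSafe inside the virtualenv should resolve the conflict.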

Here are my comments on the GitHub repository so far:

  • JOSS asks: "Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support". While you clearly meet the first requirement, as described in CONTRIBUTING.md, I think a few sentences encouraging users to file issues or seek support at https://github.com/OnroerendErfgoed/atramhasis/issues would be nice.

Here are my comments on the manuscript so far:

  • JOSS wanted me to check whether "the full list of paper authors seem appropriate and complete". I noticed in your repository's contributors list that Maarten Taeymans contributed significantly to this software but isn't credited as a co-author or in the acknowledgements. I will accept your claim if you say that they didn't contribute significantly to the software as it currently exists, but I just wanted to note this in case it was an oversight on your part.
  • Summary
  • Statement of need
    • Your first paragraph does a good job of explaining why you built Atramhasis. However, I think a sentence or two on how it compares to other similar tools (such as Skosmos) and what distinguishes it from them would be useful here.
    • It might be useful to make explicit who your target audience is: do you have a sense of the size of the thesaurus or the number of users that Atramhasis can support? Can a single Atramhasis instance support multiple thesauri? Do you anticipate thesaurus creators running Atramhasis locally on their computer when they want to edit a thesaurus, or do you anticipate an organization setting up an instance that multiple users can use?
    • "Concept uris" -- I've generally seen "uris" capitalized as "URIs", and I think that makes sense here.
    • It's "JSON-LD", not "JSON/LD". I think a citation to the W3C spec at https://www.w3.org/TR/json-ld11/ would be useful too (a small illustrative snippet follows after this list).
    • I understand that you don't have enough space here to describe the Linked Data Fragments (LDF) server in any detail, but I would include a link to your [documentation on this functionality](https://atramhasis.readthedocs.io/en/latest/development.html#running-a-linked-data-fragments-server) in your manuscript. If there is an LDF endpoint at https://thesaurus.onroerenderfgoed.be/, I think that would be useful to link to as well!
  • Minor changes, not required but suggested
    • I notice that you reference figures by saying e.g. "edit data in a normal web admin interface \autoref{fig:editingairfields}.". This appears as "... interface Figure 2." in the generated PDF. I would recommend either saying "in a normal web interface, as seen in \autoref{fig:editingairfields}." or "in a normal web interface (\autoref{fig:editingairfields})." to make it a bit more readable.
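
To make the JSON-LD point above a bit more concrete, here is a minimal rdflib sketch with assumed example data (illustrative only; it does not reproduce Atramhasis' actual output):

# Minimal, assumed example of serialising a SKOS concept as JSON-LD with rdflib
# (rdflib 6+ ships the JSON-LD serializer; older versions need rdflib-jsonld).
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import SKOS

g = Graph()
concept = URIRef("https://id.example.org/thesauri/demo/1")  # hypothetical URI
scheme = URIRef("https://id.example.org/thesauri/demo")     # hypothetical scheme

g.add((concept, SKOS.prefLabel, Literal("airfields", lang="en")))
g.add((concept, SKOS.inScheme, scheme))

print(g.serialize(format="json-ld"))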

@koenedaele

koenedaele commented Jan 17, 2023

Thank you for agreeing to review our software! For installation, I would recommend creating a demo environment via https://atramhasis.readthedocs.io/en/latest/demo.html. This should work with python 3.8, 3.9 and 3.10.

We'll check into that version conflict. Looks like you're running Python 3.9?

@koenedaele

Update paper with correct citation of SKOS, URIs, correct spelling of JSON-LD and reference to JSON-LD 1.1 and better reference to figure 2.

@SvenLieber

SvenLieber commented Jan 17, 2023

Review checklist for @SvenLieber

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/OnroerendErfgoed/atramhasis?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@koenedaele) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@koenedaele

@gaurav I've expanded the list of authors of the paper with the people who were involved in development in a significant way, although some of them have left our organisation and were not involved in this paper.

@SvenLieber

Software paper review

Hi everyone,

thanks for inviting me as a reviewer, the software package seems very interesting!
In this comment I would first like to focus on the Software Paper part of the review, also because I had issues installing the package. Following the JOSS guidelines, I will open a separate issue in the tool's repository shortly and link back to this review issue.

First of all, I would like to congratulate the authors and all contributors on this software. The tool seems to be actively maintained on GitHub, using different features such as issues and milestones. Additionally, the tool seems to be successfully used in production. I gladly tick the "Substantial Scholarly Effort" box after also having checked the cited peer-reviewed paper from 2017 with contributions from the current authors. That original paper nicely outlines different issues, such as efficient algorithms for searching the possibly infinite-depth tree structure of thesauri, or the different semantics of broader relations and how they are tackled.

The software paper has a clear description of the high-level functionality with given examples. Thus, I would consider it properly described also for non-specialists.

The statement of need highlights that the tool is useful for users without knowledge of SKOS or RDF: it provides a UI abstracting these specifications for two different types of users, those consulting a thesaurus and the editors of thesauri. Working in the cultural heritage field myself, I acknowledge that more user-friendly tools are needed. The rest of the section lists several features with some description of why they are useful, e.g. drop-down lists and widgets for end users or convenient REST services for developers. The sentence about the Linked Data Fragments server could be more descriptive: why was this chosen and what need is fulfilled with it? This would help readers who are not familiar with LDF better understand the decision to include it, for example compared to a regular SPARQL endpoint.

I mentioned the distinction between users and editors made in the paper, which I find a very interesting point. Here, the paper could outline better who the audiences actually are; currently I find descriptions such as "..that allows users to create, maintain and consult", "A concept being something a researcher wants to describe", or "editors do not write RDF statements". I think a clearer description of the different types of users and their needs would be beneficial, also with respect to my following point.

There is no dedicated state of the field section. Yet, for SKOS editors there are several existing Open Source solutions with different trade-offs, such as SkoHub, Skosmos or even the more general Protégé, WebProtégé, or VocBench.
For example, the paper mentions that "Atramhasis was written to be a lightweight open source SKOS editor", but what precisely is meant by "lightweight"? SkoHub, for instance, is basically a static-site generator where all editing flows are realised using Git(Hub). On the one hand this can be considered "lightweight", because not much software or authentication flow needs to be implemented. Yet SkoHub requires knowledge of Git from thesaurus editors, which, especially in the cultural heritage field, may represent a hurdle.
For a reader and potential user of the presented Atramhasis software, an objective comparison would be helpful (this can be based on very high-level features). Besides the example just given (knowledge of Git versus a dedicated admin user interface in Atramhasis), it might be interesting to see how Atramhasis differs in terms of software license from the other mentioned SKOS editors. A detailed comparison of performance would be out of scope for this submission, but I find a basic state of the field necessary; it is also a review criterion.

Overall, the paper is well written. The references are fine for a paper of this length (apart from the missing state of the field, which may require additional references or footnotes pointing to other SKOS editors).

I'm happy to answer any questions and help the authors improve their submission.
As mentioned, I will provide more comments for the Functionality and Documentation parts of the review.

@SvenLieber

Functionality Review

After running the demo and playing around with the tool a bit, here is my review, in addition to the related issues I opened (see above).

Overall I found the tool intuitive to use and did not encounter any issues. I browsed existing concept schemes and edited existing concepts via the admin interface. Based on these example scenarios, I would consider the tool to have a state-of-the-art web interface. For example, when opening the admin menu on the right, the main window greys out, which makes clear that the focus is now on the menu. The content of a concept scheme in the menu on the right is also loaded on demand, which makes sense for large concept schemes and which I also consider common functionality.

In the following I will only focus on a few minor things.

  • I noticed that the RDF representation uses the datatype integer for dcterms:identifier. According to the Dublin Core specification this is not wrong, as any literal can be used. However, it feels unintuitive, as I think doing arithmetic with identifiers is not a use case, and also considering that identifiers might contain letters (see the small sketch after this list). But I have seen that there are already issues regarding this, and that it might change in future versions.
    The provided dump_rdf command-line tool comes in very handy, and it is a nice addition that it also offers the possibility to output HDT files.
  • Some URIs of concepts are URNs, for example for concepts created via imports from a skosprovider. These URNs are highlighted as clickable URIs, and users not aware of the principles behind the Semantic Web might get confused if such a URN is not resolvable. Would it make sense to render URNs as text? Or, alternatively, to have a redirect service to a permanent URI in the system?
  • After adding a new concept to a concept scheme, I noticed that the list of concepts on the right was not updated; a reload was necessary. This is a minor issue, but I would have expected that some sort of event gets triggered and the list is updated with the newly added concept. An impatient user might create a concept several times or contact support about this kind of thing.
  • I never used the tool cookiecutter before, but it seemed to work. However, the small issues I encountered while setting up the demo, because of an outdated version of pip and setuptools (see linked issue above), could have been avoided with a Docker demo. I have seen that the documentation lists such a method, but also with a warning that it was written for a different version and might not work anymore, which is why I did not try it. Docker is widely used, and I think it would be an added value if the Docker setup were updated and made available for the current version of the tool as well.
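
To make the identifier point above concrete, here is a small rdflib sketch contrasting the two literal forms (illustrative only, with a hypothetical URI; this is not Atramhasis' actual serialisation code):

# Illustration of the dcterms:identifier remark above (not Atramhasis code):
# an xsd:integer-typed literal versus a plain string literal.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, XSD

concept = URIRef("https://id.example.org/thesauri/demo/5")  # hypothetical URI

as_integer = Graph()
as_integer.add((concept, DCTERMS.identifier, Literal(5, datatype=XSD.integer)))

as_string = Graph()
as_string.add((concept, DCTERMS.identifier, Literal("5")))  # also fits identifiers with letters

print(as_integer.serialize(format="turtle"))
print(as_string.serialize(format="turtle"))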

One rather odd thing is that I was not able to create a new concept scheme via the UI. However, this was also not claimed in the software paper, and the documentation clearly says that concept schemes have to be created via different interfaces. Hence the following paragraph is just some feedback and my thoughts regarding this.

I was wondering, is the addition of the feature "create concept scheme from the UI" foreseen in the future? The creation of new concept schemes seems like a very basic functionality to me, especially when starting to use the tool.
Thinking further in this direction: in the cited paper from 2017, one lesson learned was that a thesaurus manager was appointed for the use case of Flanders Heritage. This makes a lot of sense. Picking up again the different types of possible users from my software paper review above, I think such a thesaurus manager is an interesting role to consider for the future. A thesaurus manager is likely a domain expert and may require a UI. I can imagine a use case where only a user with the role of thesaurus manager can create new concept schemes and is the only one with editing rights, or could grant further editing rights to other users for the thesauri (concept schemes) they created.
From an implementation point of view, this would of course require the addition of roles, which may also depend on the authentication/authorisation mechanism used.
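
As an aside for readers wondering what those "different interfaces" look like: Atramhasis builds on the skosprovider library, so here is a rough sketch of a vocabulary defined in code with skosprovider (adapted from its documentation; the exact setup Atramhasis uses, e.g. SQLAlchemy-backed providers, may differ):

# Rough sketch of a skosprovider-based vocabulary defined in code
# (adapted from the skosprovider documentation; not Atramhasis-specific).
from skosprovider.providers import DictionaryProvider

larch = {
    'id': '1',
    'uri': 'http://id.trees.org/1',  # example URI in the style of the skosprovider docs
    'type': 'concept',
    'labels': [
        {'type': 'prefLabel', 'language': 'en', 'label': 'The Larch'},
    ],
}

provider = DictionaryProvider({'id': 'TREES'}, [larch])

concept = provider.get_by_id('1')
print(concept.uri, concept.labels)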

I will provide a different comment for the review of the documentation early next week.

@koenedaele

I would like to thank both reviewers for their valuable feedback. I've already made some changes and opened some tickets for further changes. I'll try to address all comments over the next few days and weeks. If I do miss some, please let me know.

@danielskatz

danielskatz commented Jan 25, 2023

@koenedaele - are you still working on this? Are there specific changes the reviewers should look at, at this point? Or should we wait until you have done more?

@koenedaele

@danielskatz I'm still working on this. I've linked a ticket in the Atramhasis repo that contains the requested changes so I can track them. Some of them were quite simple; some of the requests for more information require more work. I will ask for a new review once I have a version that I'm reasonably happy with. My main worry is that answering all requests for extra information pushes the paper beyond the normal size limits for JOSS. But I'm writing those parts first, and we can see if parts need to be cut again.

@editorialbot
Collaborator Author

👋 @openjournals/csism-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4051, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept (Papers recommended for acceptance in JOSS) label on Mar 14, 2023
@danielskatz

danielskatz commented Mar 14, 2023

Also, the DOI above (10.1007/978-3-030-00668-6_15) appears to be a correct one for the reference - if you agree, please add it to the entry in your bib file, and let me know, then I'll regenerate the proof

@koenedaele

@editorialbot check references

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6984378 is OK
- 10.55465/rsdd4339 is OK
- 10.55465/mmym4330 is OK
- 10.1007/978-3-030-00668-6_15 is OK
- 10.5194/isprs-annals-IV-2-W2-151-2017 is OK
- 10.1016/j.websem.2016.03.003 is OK
- 10.5194/isprsannals-II-5-W3-323-2015 is OK
- 10.2139/ssrn.3198999 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@koenedaele

@danielskatz I've updated the reference and the Zenodo title.

@danielskatz

@editorialbot recommend-accept

@koenedaele - again, this will generate the proof that I'll proofread next, since I didn't with the previous version

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6984378 is OK
- 10.55465/rsdd4339 is OK
- 10.55465/mmym4330 is OK
- 10.1007/978-3-030-00668-6_15 is OK
- 10.5194/isprs-annals-IV-2-W2-151-2017 is OK
- 10.1016/j.websem.2016.03.003 is OK
- 10.5194/isprsannals-II-5-W3-323-2015 is OK
- 10.2139/ssrn.3198999 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/csism-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4052, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@danielskatz

@koenedaele - I've made some suggestions for small changes in OnroerendErfgoed/atramhasis#804 - please merge this or let me know what you disagree with, then we can continue the process of acceptance and publication.

@koenedaele

@danielskatz Thank you! I've merged the changes.

@danielskatz

@editorialbot recommend-accept

hopefully final proof...

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.6984378 is OK
- 10.55465/rsdd4339 is OK
- 10.55465/mmym4330 is OK
- 10.1007/978-3-030-00668-6_15 is OK
- 10.5194/isprs-annals-IV-2-W2-151-2017 is OK
- 10.1016/j.websem.2016.03.003 is OK
- 10.5194/isprsannals-II-5-W3-323-2015 is OK
- 10.2139/ssrn.3198999 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/csism-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#4053, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@danielskatz

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05040 joss-papers#4054
  2. Wait a couple of minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05040
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added the accepted and published (Papers published in JOSS) labels on Mar 14, 2023
@danielskatz

Congratulations to @koenedaele (Koen Van Daele) and co-authors!!

And thanks to @gaurav and @SvenLieber for reviewing!
We couldn't do this without you

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05040/status.svg)](https://doi.org/10.21105/joss.05040)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05040">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05040/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05040/status.svg
   :target: https://doi.org/10.21105/joss.05040

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@koenedaele

@danielskatz @gaurav @SvenLieber Thanks for your time and effort. This has been the best experience I've ever had submitting a paper, due to the tooling you're using and the very open and friendly review process. I've volunteered to review as well.

@arfon
Member

arfon commented Aug 17, 2023

@editorialbot reaccept

@editorialbot
Collaborator Author

Rebuilding paper!

@editorialbot
Collaborator Author

🌈 Paper updated!

New PDF and metadata files 👉 openjournals/joss-papers#4494
