
updates to docs (new changelog, add contributing guide) #432

Merged
merged 90 commits into bccp:master on Oct 30, 2017

Conversation

@nickhand
Member

nickhand commented Oct 26, 2017

A few updates to the docs:

  • redesign the main page of the docs, adding a short one-minute example
  • add a contributing guide
  • use an extension that defines an :issue: role in the changelog so we can link to issue and PR numbers; added links for past versions
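For reference, a minimal sketch of the Sphinx configuration such an `:issue:` role typically needs. This assumes the third-party sphinx-issues extension; the PR itself does not name the extension it uses, so treat the names below as illustrative:

```python
# conf.py sketch -- assumes the sphinx-issues extension, which
# provides :issue: and :pr: roles that render as GitHub links.
extensions = [
    "sphinx_issues",
]

# repository used to resolve e.g. :issue:`432` to a GitHub URL
issues_github_path = "bccp/nbodykit"
```

With this in place, writing ``:issue:`432``` in the changelog renders as a link to the corresponding GitHub issue.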
a standalone script that will reproduce the issue. Issues have a much higher chance
of being resolved quickly if we can easily reproduce the bug.
6. Take a stab at fixing the bug yourself! :ref:`Pull requests <PR-guide>` for


@rainwoodman

rainwoodman Oct 26, 2017

Member

mention pdb and pdb++?


@nickhand

nickhand Oct 26, 2017

Member

yes, good idea
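Picking up the pdb suggestion: a minimal, self-contained sketch of post-mortem debugging with the standard-library pdb (pdb++ installs as a drop-in replacement). The `mean` function is a made-up example, not code from this PR:

```python
import traceback

def mean(values):
    """Average of a list; fails on an empty list."""
    return sum(values) / len(values)

try:
    mean([])  # len([]) == 0, so this raises ZeroDivisionError
except ZeroDivisionError:
    # In an interactive session, calling pdb.post_mortem() here drops
    # you into the frame where the error occurred so you can inspect
    # locals; running `python -m pdb script.py` achieves the same from
    # the command line.
    print("caught:", traceback.format_exc().splitlines()[-1])
```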

rainwoodman and others added some commits Oct 26, 2017

Allow automatic discovery of the header from a BigFile.
This improves the usability of reading a saved catalog.
A saved catalog stores its attrs in the Header column, but when
reading it back with BigFileCatalog, the default was not to use
that column, creating confusion.

Original email report of the issue:
```
I’m having some difficulty saving and reading files using CatalogSource. I have a simple script like below to save the ArrayCatalog data:

    # create the catalogue
    particle_data = {}
    particle_data['Position'] = pos
    particle_data['Velocity'] = vel
    particle_data['ID'] = Id
    cat = NBlab.ArrayCatalog(particle_data, BoxSize=header['boxsize'])
    cat.save(dir+'/Matter', ["Position", "Velocity"])

This saves everything nicely into the Matter directory. But later when I try to read in this data using something like
cat = BigFileCatalog('Matter')
I get the following: "Error: Dataset length is inconsistent on Position"

Is there something I need to specify in the save command in order for me to properly read the data in using BigFileCatalog?
```
@nickhand


Member

nickhand commented Oct 26, 2017

new changes:

  • dynamically generate a list of all imported functions in nbodykit.lab for API docs (I find this very useful and it should help with some confusion around the "lab")
  • introduction page is now complete
and concise code that flows from step to step.
Parallel computation with MPI
-----------------------------


@nickhand

nickhand Oct 26, 2017

Member

@rainwoodman let me know if you have any suggestions for this section in particular

.. code:: bash

    $ srun -n 4 python example.py


@rainwoodman

rainwoodman Oct 26, 2017

Member

On a managed computing facility, it is also usually necessary to include directives that reserve the computing resources used for running the script. Usually you can refer to the 'Submitting a Job' section of the facility's user guide (e.g., add links to NERSC and NCSA; though be aware these user guides are not always accurate, and it is always better to check with someone who uses these facilities first).
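For example, on a SLURM-managed cluster the submission script might look like the sketch below. The directive names follow standard SLURM conventions; the job name, task count, and time limit are placeholders to adapt from the facility's 'Submitting a Job' guide:

```shell
#!/bin/bash
#SBATCH --job-name=nbodykit-example   # placeholder job name
#SBATCH --ntasks=4                    # one MPI rank per task
#SBATCH --time=00:10:00               # wall-clock limit; adjust per facility

# launch 4 MPI ranks, matching the srun example above
srun -n 4 python example.py
```

The script would be submitted with ``sbatch script.sh``; other schedulers (PBS, LSF) use different directive syntax but the same idea.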

@nickhand nickhand merged commit f19be76 into bccp:master Oct 30, 2017

2 checks passed

continuous-integration/travis-ci/pr The Travis CI build passed
coverage/coveralls Coverage increased (+0.005%) to 95.344%