Another try to fix docs
jpn-- committed Mar 16, 2017
1 parent dffdab1 commit 4c431a4
Showing 3 changed files with 13 additions and 85 deletions.
doc/conf.py (2 changes: 1 addition & 1 deletion)
@@ -75,7 +75,7 @@ def __getattr__(cls, name):
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'numpydoc',
# 'numpydoc',
# 'sphinxcontrib.napoleon',
'sphinx.ext.napoleon',
# 'sphinx.ext.viewcode',
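
For reference, a minimal sketch of the relevant fragment of doc/conf.py after this change, assuming the surrounding configuration is otherwise unchanged: sphinx.ext.napoleon now parses the NumPy-style docstring sections instead of numpydoc.

# doc/conf.py (relevant fragment only, as a sketch)
extensions = [
    'sphinx.ext.autodoc',        # pulls docstrings in via .. automethod:: and friends
    # 'numpydoc',                # disabled by this commit
    # 'sphinxcontrib.napoleon',
    'sphinx.ext.napoleon',       # parses NumPy-style sections such as Parameters
    # 'sphinx.ext.viewcode',
]
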
doc/openmatrix.rst (83 changes: 3 additions & 80 deletions)
@@ -17,88 +17,11 @@ fun. :sup:`[citation needed]`
Importing Data
--------------

.. np:method:: OMX.import_datatable(filepath, one_based=True, chunksize=10000, column_map=None, default_atom='float32')
.. automethod:: OMX.import_datatable

Import a table in r,c,x,x,x... format into the matrix.

The r and c columns need to be either 0-based or 1-based index values
(this may be relaxed in the future). The matrix must already be set up
with the correct size before importing the datatable.

Parameters
----------
filepath : str or buffer
This argument will be fed directly to the :func:`pandas.read_csv` function.
one_based : bool
If True (the default) it is assumed that zones are indexed sequentially starting with 1
(as is typical for travel demand forecasting applications).
Otherwise, it is assumed that zones are indexed sequentially starting with 0 (typical for other c and python applications).
chunksize : int
The number of rows of the source file to read as a chunk. Reading a giant file in moderate sized
chunks can be much faster and less memory intensive than reading the entire file.
column_map : dict or None
If given, this dict maps columns of the input file to OMX tables, with the keys as
the columns in the input and the values as the tables in the output.
default_atom : str or dtype
The default atomic type for imported data when the table does not already exist in this
openmatrix.
.. automethod:: OMX.import_datatable_3d



.. np:method:: OMX.import_datatable_3d(filepath, one_based=True, chunksize=10000, default_atom='float32')
Import a table in r,c,x,x,x... format into the matrix.

The r and c columns need to be either 0-based or 1-based index values
(this may be relaxed in the future). The matrix must already be set up
with the correct size before importing the datatable.

This method is functionally the same as :meth:`import_datatable` but uses a different implementation.
It is much more memory intensive but also much faster than the non-3d version.

Parameters
----------
filepath : str or buffer
This argument will be fed directly to the :func:`pandas.read_csv` function.
one_based : bool
If True (the default) it is assumed that zones are indexed sequentially starting with 1
(as is typical for travel demand forecasting applications).
Otherwise, it is assumed that zones are indexed sequentially starting with 0 (typical for other c and python applications).
chunksize : int
The number of rows of the source file to read as a chunk. Reading a giant file in moderate sized
chunks can be much faster and less memory intensive than reading the entire file.
default_atom : str or dtype
The default atomic type for imported data when the table does not already exist in this
openmatrix.


.. np:method:: OMX.import_datatable_as_lookups(filepath, chunksize=10000, column_map=None, log=None, n_rows=None, zone_ix=None, zone_ix1=1, drop_zone=None)
Import a table in r_or_c,x,x,x... format into the matrix.

The r_or_c column needs to be either 0-based or 1-based index values
(this may be relaxed in the future). The matrix must already be set up
with the correct shape before importing the datatable.

Parameters
----------
filepath : str or buffer
This argument will be fed directly to the :func:`pandas.read_csv` function.
chunksize : int
The number of rows of the source file to read as a chunk. Reading a giant file in moderate sized
chunks can be much faster and less memory intensive than reading the entire file.
column_map : dict or None
If given, this dict maps columns of the input file to OMX tables, with the keys as
the columns in the input and the values as the tables in the output.
n_rows : int or None
If given, this is the number of rows in the source file. It can be omitted and will
be discovered automatically, but only for source files with consecutive zone numbering.
zone_ix : str or None
If given, this is the column name in the source file that gives the zone numbers.
zone_ix1 : 1 or 0
The smallest zone number. Defaults to 1
drop_zone : int or None
If given, zones with this number (typically 0 or -1) will be ignored.
.. automethod:: OMX.import_datatable_as_lookups


.. |idca| replace:: :ref:`idca <idca>`
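
To make the automethod-documented API above concrete, here is a hedged usage sketch of OMX.import_datatable based on the signature in the removed docstring. The import path (larch.OMX), the constructor call, the file names, and the column names are assumptions for illustration, not taken from the repository.

import larch

# Open an OMX file for writing; the constructor call is assumed here, and the
# matrix must already be set up with the correct shape before importing.
omx = larch.OMX('skims.omx', mode='a')           # hypothetical file name

omx.import_datatable(
    'costs.csv',                                 # r,c,x,x,x... format: row, column, then value columns
    one_based=True,                              # zone numbers in the file start at 1
    chunksize=10000,                             # read the CSV in moderate-sized chunks
    column_map={'auto_time': 'AUTO_TIME'},       # hypothetical input column -> OMX table name
    default_atom='float32',                      # dtype for tables that do not exist yet
)
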
py/model_reporter/xhtml.py (13 changes: 9 additions & 4 deletions)
@@ -211,6 +211,7 @@ def xhtml_report_v0(self, cats=None, raw_xml=False, throw_exceptions=False, file
The report content. You need to save it to a file on your own,
if desired.
>>> m = larch.Model.Example(1, pre=True)
>>> from larch.util.temporaryfile import TemporaryHtml
>>> html = m.xhtml_report_v0()
@@ -1299,7 +1300,8 @@ def xhtml_artparams(self, groups=None, display_inital=False, display_id=False, *
-------
larch.util.xhtml.Elem
A div containing the model parameters.
>>> from larch.util.categorize import Categorizer, Renamer
>>> m = larch.Model.Example(1, pre=True)
>>> param_groups = [
@@ -1333,7 +1335,8 @@ def xhtml_ll(self,**format):
-------
larch.util.xhtml.Elem
A div containing the model parameters.
>>> from larch.util.xhtml import XHTML
>>> m = larch.Model.Example(1, pre=True)
>>> m.xhtml('title', 'll')
@@ -1581,7 +1584,8 @@ def xhtml_ch_av(self,max_alts=50,**format):
-------
larch.util.xhtml.Elem
A div containing the summary statistics for choice and availability.
>>> from larch.util.xhtml import XHTML
>>> m = larch.Model.Example(1, pre=True)
>>> m.df = larch.DT.Example('MTC')
@@ -1714,7 +1718,8 @@ def xhtml_utilitydata(self,**format):
-------
larch.util.xhtml.Elem
A div containing the summary statistics for choice and availability.
>>> from larch.util.xhtml import XHTML
>>> m = larch.Model.Example(1, pre=True)
>>> m.df = larch.DT.Example('MTC')
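
Putting the doctest fragments above together, a hedged sketch of how these reporting methods are exercised; it assumes the example model and data sets ship with larch as the doctests suggest and that no further setup is required.

import larch

m = larch.Model.Example(1, pre=True)      # pre-estimated example model, as in the doctests
m.df = larch.DT.Example('MTC')            # attach the example data, needed for the data summaries

report = m.xhtml('title', 'll')           # assemble a report from selected sections
html_text = m.xhtml_report_v0()           # or render the full v0 report; saving it to a file is up to the caller
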
