{{INTRO}}
As simple as executing:
pip install outputty
Code time:
>>> from outputty import Table
>>> my_table = Table(headers=['name', 'age']) # headers are the columns
>>> my_table.append(('Álvaro Justen', 24)) # a row as tuple
>>> my_table.append({'name': 'Other User', 'age': 99}) # a row as dict
>>> print my_table # a text representation of Table
+---------------+-----+
| name          | age |
+---------------+-----+
| Álvaro Justen | 24  |
| Other User    | 99  |
+---------------+-----+
>>> print 'First row:', my_table[0] # Table is indexable
First row: [u'\xc1lvaro Justen', 24]
>>> print 'Sum of ages:', sum(my_table['age']) # you can get columns too
Sum of ages: 123
>>> my_table.write('csv', 'my-table.csv') # CSV plugin will save its contents in a file
>>> # let's see what's in the file...
>>> print open('my-table.csv').read()
"name","age"
"Álvaro Justen","24"
"Other User","99"
>>> # let's use HTML plugin!
>>> print my_table.write('html') # without a filename, ``write`` returns a string
<table>
<thead>
<tr class="header">
<th>name</th>
<th>age</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>Álvaro Justen</td>
<td>24</td>
</tr>
<tr class="even">
<td>Other User</td>
<td>99</td>
</tr>
</tbody>
</table>
Table has a lot of other features. To learn more (by examples), read the outputty tutorial and see the examples folder. Enjoy!
Yes, there are a lot of features to add (it's just the beginning). If you want to contribute, please see our outputty wishlist.
You can also use the outputty Issue Tracking System on GitHub to report bugs.
If you want to contribute to this project, please:
- Install the development dependencies by running
  pip install -r requirements/development.txt
- Execute
  make test
  to run all tests -- please run all tests before pushing.
- To run just one test file, execute:
  nosetests --with-coverage --cover-package outputty tests/test_your-test-file.py
- Try to keep a test coverage of 100%.
- Use test-driven development.
- Use nvie's gitflow -- to learn it, read A Successful Git Branching Model.
- Create/update documentation (README/docstrings/man page)
- Do NOT edit README.rst and tutorial.rst; edit README-template.rst or
  tutorial-template.rst instead and run
  make create-docs
  to create the new README.rst and tutorial.rst (before committing). The
  tutorial will be created based on the files in the examples folder.
If you want to create a new plugin to import/export from/to some new resource, please see the files outputty/plugin_*.py as examples. They are quite simple; please follow these steps:
- Create a file named outputty/plugin_name.py, where name is the name of
  your plugin.
- Create read and/or write functions in this file. These functions receive
  the Table object and optional parameters.
  - read: should read data from the resource specified in the parameters
    and put this data in the Table (using Table.append or Table.extend).
  - write: should read data from the Table (iterating over it, using
    slicing etc.) and write this data to the resource specified in the
    parameters.
- Call your plugin by executing my_table.write('name', optional_parameters...)
  or my_table.read('name', optional_parameters...) (where name is your
  plugin's name) -- when you execute it, outputty will call
  outputty.plugin_name.read/outputty.plugin_name.write.
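The plugin protocol above can be sketched roughly as follows. This is a hypothetical plugin (here saving tab-separated text), and MiniTable is a stand-in class with just enough of Table's API (headers, append, iteration) to make the sketch self-contained -- real plugin functions would receive an actual outputty Table.

```python
# Stand-in for outputty's Table: only the pieces this sketch needs.
class MiniTable(object):
    def __init__(self, headers=None):
        self.headers = headers or []
        self.rows = []

    def append(self, row):
        self.rows.append(list(row))

    def __iter__(self):
        return iter(self.rows)


# --- hypothetical outputty/plugin_txt.py ------------------------------

def write(table, filename):
    """Write the table's headers and rows to ``filename`` as tab-separated text."""
    with open(filename, 'w') as fp:
        fp.write('\t'.join(table.headers) + '\n')
        for row in table:
            fp.write('\t'.join(str(value) for value in row) + '\n')


def read(table, filename):
    """Fill the table with the headers and rows read from a tab-separated file."""
    with open(filename) as fp:
        table.headers = fp.readline().rstrip('\n').split('\t')
        for line in fp:
            table.append(line.rstrip('\n').split('\t'))


if __name__ == '__main__':
    original = MiniTable(headers=['name', 'age'])
    original.append(('Alvaro', '24'))
    write(original, 'people.txt')

    copy = MiniTable()
    read(copy, 'people.txt')
    print(copy.headers)  # ['name', 'age']
    print(copy.rows)     # [['Alvaro', '24']]
```

With outputty itself, these two functions would live in outputty/plugin_txt.py and be triggered by my_table.write('txt', 'people.txt') and my_table.read('txt', 'people.txt').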
Your plugin's read function must put all data inside the Table as unicode, and your plugin's write function will receive a Table object with all data in unicode (it should not change this). If you need to decode/encode before/after doing some actions in your plugin, you can use Table.decode() and Table.encode().
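The decode-on-read / encode-on-write convention can be illustrated with plain helper functions. Note that decode_row and encode_row below are illustrative names invented for this sketch, not outputty's Table.decode()/Table.encode() API, whose exact signatures are not shown here.

```python
# Sketch of the convention: plugins decode bytes to unicode on the way
# into the Table, and encode unicode back to bytes on the way out.

def decode_row(row, encoding='utf-8'):
    """Bytes from an external resource -> unicode text for the Table."""
    return [value.decode(encoding) if isinstance(value, bytes) else value
            for value in row]

def encode_row(row, encoding='utf-8'):
    """Unicode text from the Table -> bytes for an external resource."""
    return [value.encode(encoding) if isinstance(value, str) else value
            for value in row]

raw = [b'\xc3\x81lvaro Justen', 24]  # UTF-8 bytes as read from a file
unicode_row = decode_row(raw)        # what a plugin's read() should store
print(unicode_row[0])                # Álvaro Justen
print(encode_row(unicode_row))       # back to bytes for write()
```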
{{AUTHORS}}
outputty-like:
- tablib: format-agnostic tabular dataset library.
- PyTables: package for managing hierarchical datasets and designed to efficiently and easily cope with extremely large amounts of data.
- csvstudio: Python tool to analyze csv files.
- csvsimple: a simple tool to handle CSV data.
- toolshed: less boiler-plate.
- buzhug: a fast, pure-Python database engine.
Data analysis:
- pyf: framework and platform dedicated to large data processing, mining, transforming, reporting and more.
- pygrametl: Python framework which offers commonly used functionality for development of Extract-Transform-Load (ETL) processes.
- etlpy seems to be a dead project.
- orange: data visualization and analysis for novice and experts.
- Ruffus: lightweight python module to run computational pipelines.
- webstore: web API-enabled datastore backed by SQL databases.
Command-line tools:
Other:
- pyspread: non-traditional spreadsheet application.