Intake-parquet

Intake data loader interface to the parquet binary tabular data format.

Parquet is very popular in the big-data ecosystem because it provides columnar and chunk-wise access to the data, with efficient encodings and compression. This makes the format particularly effective for streaming through large subsections of even larger datasets, hence its common use with Hadoop and Spark.

Parquet data may be single files, directories of files, or nested directories, where the directory names are meaningful in the partitioning of the data.

Features

The parquet plugin allows for:

  • efficient metadata parsing, so you know the data types and number of records without loading any data
  • random access of partitions
  • column and index selection, so you load only the data you need
  • passing of value-based filters, so that you only load those partitions containing some valid data (NB: this selects whole partitions; it does not filter the values within a partition)

Installation

To install with conda:

conda install -c conda-forge intake-parquet

Examples

See the notebook in the examples/ directory.
