Skip to content

Commit

Permalink
Updating references to ContinuumIO -> intake (#279)
Browse files Browse the repository at this point in the history
* Updating README with new org.

* Replacing ContinuumIO -> intake

* Ensure test_dat fails
  • Loading branch information
jsignell committed Feb 25, 2019
1 parent 90aadf4 commit 4f28e41
Show file tree
Hide file tree
Showing 9 changed files with 35 additions and 34 deletions.
10 changes: 5 additions & 5 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,12 +1,12 @@
# Intake: A general interface for loading data

![Logo](https://github.com/ContinuumIO/intake/raw/master/logo-small.png)
![Logo](https://github.com/intake/intake/raw/master/logo-small.png)

[![Build Status](https://travis-ci.org/ContinuumIO/intake.svg?branch=master)](https://travis-ci.org/ContinuumIO/intake)
[![Coverage Status](https://coveralls.io/repos/github/ContinuumIO/intake/badge.svg?branch=master)](https://coveralls.io/github/ContinuumIO/intake?branch=master)
[![Build Status](https://travis-ci.org/intake/intake.svg?branch=master)](https://travis-ci.org/intake/intake)
[![Coverage Status](https://coveralls.io/repos/github/intake/intake/badge.svg?branch=master)](https://coveralls.io/github/intake/intake?branch=master)
[![Documentation Status](https://readthedocs.org/projects/intake/badge/?version=latest)](http://intake.readthedocs.io/en/latest/?badge=latest)
[![Join the chat at https://gitter.im/ContinuumIO/intake](https://badges.gitter.im/ContinuumIO/intake.svg)](https://gitter.im/ContinuumIO/intake?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Waffle.io - Columns and their card count](https://badge.waffle.io/ContinuumIO/intake.svg?columns=all)](https://waffle.io/ContinuumIO/intake)
[![Waffle.io - Columns and their card count](https://badge.waffle.io/intake/intake.svg?columns=all)](https://waffle.io/intake/intake)


Intake is a lightweight set of tools for loading and sharing data in data science projects.
Expand All @@ -19,7 +19,7 @@ Intake helps you:

Documentation is available at [Read the Docs](http://intake.readthedocs.io/en/latest).

Status of intake and related packages is available at [Status Dashboard](https://continuumio.github.io/intake-dashboard/status.html)
Status of intake and related packages is available at [Status Dashboard](https://intake.github.io/status)

Install
-------
Expand Down
2 changes: 1 addition & 1 deletion docs/source/examples.rst
Original file line number Diff line number Diff line change
Expand Up @@ -12,7 +12,7 @@ It can take a while, sometimes, for Binder to come up, please have patience.
See also the `example data`_ page, containing data-sets which can be built and installed
as conda packages.

.. _example data: https://github.com/ContinuumIO/intake/tree/master/examples
.. _example data: https://github.com/intake/intake/tree/master/examples


General
Expand Down
2 changes: 1 addition & 1 deletion docs/source/index.rst
Original file line number Diff line number Diff line change
Expand Up @@ -67,7 +67,7 @@ smooth the progression of data from developers and providers to users.
* Pluggable architecture of Intake allows for many points to add and improve
* Open, simple code-base, come and get involved on `github`_!

.. _github: https://github.com/ContinuumIO/intake
.. _github: https://github.com/intake/intake


First steps
Expand Down
2 changes: 1 addition & 1 deletion docs/source/overview.rst
Original file line number Diff line number Diff line change
Expand Up @@ -88,5 +88,5 @@ Future Directions
-----------------

Ongoing work for enhancements, as well as requests for plugins, etc., can be found at the
`issue tracker <https://github.com/ContinuumIO/intake/issues>`_. See the :ref:`roadmap`
`issue tracker <https://github.com/intake/intake/issues>`_. See the :ref:`roadmap`
for general mid- and long-term goals.
40 changes: 20 additions & 20 deletions docs/source/plugin-directory.rst
Original file line number Diff line number Diff line change
Expand Up @@ -8,36 +8,36 @@ contains in parentheses:

* builtin to Intake (``catalog``, ``csv``, ``intake_remote``, ``ndzarr``,
``numpy``, ``textfiles``, ``yaml_file_cat``, ``yaml_files_cat``)
* `intake-astro <https://github.com/ContinuumIO/intake-astro>`_ Table and array loading of FITS astronomical data (``fits_array``, ``fits_table``)
* `intake-accumulo <https://github.com/ContinuumIO/intake-accumulo>`_ Apache Accumulo clustered data storage (``accumulo``)
* `intake-avro <https://github.com/ContinuumIO/intake-avro>`_: Apache Avro data serialization format (``avro_table``, ``avro_sequence``
* `intake-astro <https://github.com/intake/intake-astro>`_ Table and array loading of FITS astronomical data (``fits_array``, ``fits_table``)
* `intake-accumulo <https://github.com/intake/intake-accumulo>`_ Apache Accumulo clustered data storage (``accumulo``)
* `intake-avro <https://github.com/intake/intake-avro>`_: Apache Avro data serialization format (``avro_table``, ``avro_sequence``
* `intake-cmip <https://github.com/NCAR/intake-cmip>`_: load `CMIP <https://cmip.llnl.gov/>`_ (Coupled Model Intercomparison Project) data (``cmip5``)
* `intake-bluesky <https://nsls-ii.github.io/intake-bluesky/>`_: search and retrieve data in the `bluesky <https://nsls-ii.github.io/bluesky>`_ data model
* `intake-dynamodb <https://github.com/informatics-lab/intake-dynamodb>`_ link to Amazon DynamoDB (``dynamodb``)
* `intake-elasticsearch <https://github.com/ContinuumIO/intake-elasticsearch>`_: Elasticsearch search and analytics engine (``elasticsearch_seq``, ``elasticsearch_table``)
* `intake-elasticsearch <https://github.com/intake/intake-elasticsearch>`_: Elasticsearch search and analytics engine (``elasticsearch_seq``, ``elasticsearch_table``)
* `intake-geopandas <https://github.com/informatics-lab/intake_geopandas>`_: load ESRI Shape Files with geopandas (``shape``)
* `intake-hbase <https://github.com/ContinuumIO/intake-hbase>`_: Apache HBase database (``hbase``)
* `intake-hbase <https://github.com/intake/intake-hbase>`_: Apache HBase database (``hbase``)
* `intake-iris <https://github.com/informatics-lab/intake-iris>`_ load netCDF and GRIB files with IRIS (``grib``, ``netcdf``)
* `intake-mongo <https://github.com/ContinuumIO/intake-mongo>`_: MongoDB noSQL query (``mongo``)
* `intake-netflow <https://github.com/ContinuumIO/intake-netflow>`_: Netflow packet format (``netflow``)
* `intake-odbc <https://github.com/ContinuumIO/intake-odbc>`_: ODBC database (``odbc``)
* `intake-parquet <https://github.com/ContinuumIO/intake-parquet>`_: Apache Parquet file format (``parquet``)
* `intake-pcap <https://github.com/ContinuumIO/intake-pcap>`_: PCAP network packet format (``pcap``)
* `intake-postgres <https://github.com/ContinuumIO/intake-postgres>`_: PostgreSQL database (``postgres``)
* `intake-mongo <https://github.com/intake/intake-mongo>`_: MongoDB noSQL query (``mongo``)
* `intake-netflow <https://github.com/intake/intake-netflow>`_: Netflow packet format (``netflow``)
* `intake-odbc <https://github.com/intake/intake-odbc>`_: ODBC database (``odbc``)
* `intake-parquet <https://github.com/intake/intake-parquet>`_: Apache Parquet file format (``parquet``)
* `intake-pcap <https://github.com/intake/intake-pcap>`_: PCAP network packet format (``pcap``)
* `intake-postgres <https://github.com/intake/intake-postgres>`_: PostgreSQL database (``postgres``)
* `intake-s3-manifests <https://github.com/informatics-lab/intake-s3-manifests>`_ (``s3_manifest``)
* `intake-solr <https://github.com/ContinuumIO/intake-solr>`_: Apache Solr search platform (``solr``)
* `intake-spark <https://github.com/ContinuumIO/intake-spark>`_: data processed by Apache Spark (``spark_cat``, ``spark_rdd``, ``spark_dataframe``)
* `intake-sql <https://github.com/ContinuumIO/intake-sql>`_: Generic SQL queries via SQLAlchemy (``sql_cat``, ``sql``, ``sql_auto``, ``sql_manual``)
* `intake-splunk <https://github.com/ContinuumIO/intake-splunk>`_: Splunk machine data query (``splunk``)
* `intake-xarray <https://github.com/ContinuumIO/intake-xarray>`_: load netCDF, Zarr and other multi-dimensional data (``xarray_image``, ``netcdf``, ``opendap``,
* `intake-solr <https://github.com/intake/intake-solr>`_: Apache Solr search platform (``solr``)
* `intake-spark <https://github.com/intake/intake-spark>`_: data processed by Apache Spark (``spark_cat``, ``spark_rdd``, ``spark_dataframe``)
* `intake-sql <https://github.com/intake/intake-sql>`_: Generic SQL queries via SQLAlchemy (``sql_cat``, ``sql``, ``sql_auto``, ``sql_manual``)
* `intake-splunk <https://github.com/intake/intake-splunk>`_: Splunk machine data query (``splunk``)
* `intake-xarray <https://github.com/intake/intake-xarray>`_: load netCDF, Zarr and other multi-dimensional data (``xarray_image``, ``netcdf``, ``opendap``,
``rasterio``, ``remote-xarray``, ``zarr``)

The status of these projects is available at `Status Dashboard <https://continuumio.github.io/intake-dashboard/status.html>`_.
The status of these projects is available at `Status Dashboard <https://intake.github.io/status/>`_.

Don't see your favorite format? See :doc:`making-plugins` for how to create new plugins.

Note that if you want your plugin listed here, open an issue in the `Intake
issue repository <https://github.com/ContinuumIO/intake>`_ and add an entry to the
`status dashboard repository <https://github.com/ContinuumIO/intake-dashboard>`_. We also have a
`plugin wishlist Github issue <https://github.com/ContinuumIO/intake/issues/58>`_
issue repository <https://github.com/intake/intake>`_ and add an entry to the
`status dashboard repository <https://github.com/intake/intake-dashboard>`_. We also have a
`plugin wishlist Github issue <https://github.com/intake/intake/issues/58>`_
that shows the breadth of plugins we hope to see for Intake.
4 changes: 2 additions & 2 deletions docs/source/roadmap.rst
Original file line number Diff line number Diff line change
Expand Up @@ -49,7 +49,7 @@ Next-generation GUI

The jupyter-widgets GUI is useful and simple, but we can do better. See the `long form proposal`_.

.. _long form proposal: https://github.com/ContinuumIO/intake/issues/225
.. _long form proposal: https://github.com/intake/intake/issues/225


Catalog services
Expand All @@ -58,7 +58,7 @@ Catalog services
We are experimenting with reflecting external catalog-like data servers as Intake catalogs, so that the
familiar API can be used for all the disparate services. See for example `this discussion`_.

.. _this discussion: https://github.com/ContinuumIO/intake/issues/224
.. _this discussion: https://github.com/intake/intake/issues/224

Use DAT as a cache service
--------------------------
Expand Down
1 change: 1 addition & 0 deletions intake/source/tests/test_cache.py
Original file line number Diff line number Diff line change
Expand Up @@ -223,6 +223,7 @@ def test_compressed_cache_bad(temp_cache):
intake.config.conf['cache_download_progress'] = old


@pytest.mark.xfail
def test_dat(temp_cache):
import subprocess
try:
Expand Down
2 changes: 1 addition & 1 deletion setup.py
Original file line number Diff line number Diff line change
Expand Up @@ -17,7 +17,7 @@
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
description='Data load and catalog system',
url='https://github.com/ContinuumIO/intake',
url='https://github.com/intake/intake',
maintainer='Martin Durant',
maintainer_email='mdurant@anaconda.com',
license='BSD',
Expand Down
6 changes: 3 additions & 3 deletions templates/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -8,19 +8,19 @@ To use these templates, install cookiecutter:
```
conda install -c defaults -c conda-forge cookiecutter
```
or
or
```
pip install cookiecutter
```

For a new plugin:
```
cookiecutter gh:ContinuumIO/intake/templates/plugin
cookiecutter gh:intake/intake/templates/plugin
```

And for a new conda data package:
```
cookiecutter gh:ContinuumIO/intake/templates/data_package
cookiecutter gh:intake/intake/templates/data_package
```

The template will prompt for parameters.

0 comments on commit 4f28e41

Please sign in to comment.