updated crash version to 0.11.1
chaudum committed Feb 4, 2015
1 parent a9106f7 commit dfb22dc
Showing 19 changed files with 193 additions and 141 deletions.
21 changes: 21 additions & 0 deletions CHANGES.txt
@@ -5,6 +5,27 @@ Changes for Crate
Unreleased
==========

- Updated crash version to 0.11.1, which contains the following changes:

  - added a ``--format`` command line option
    to support different response output formats such as
    ``tabular``, ``raw``, ``json``, ``csv`` and ``mixed``
    (see the usage sketch after this changelog excerpt)

  - BREAKING CHANGE:
    the ``CONNECT <host>`` client command was changed to ``\connect <host>``;
    see the documentation for further details

  - alternative CLI implementation using ``prompt_toolkit``

  - added coloured printing in the interactive shell

- Fix: Filtering on a routing column sometimes didn't work if the value was an
  empty string

- Fix: Bulk inserts with mixed but compatible types (e.g. int and long) failed

- Fix: Force UTF-8 encoding in the file reading collector to avoid the JVM's
  default encoding settings

- Added support for the ``std_dev``, ``variance`` and ``geometric_mean``
  aggregation functions
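
For illustration, the new ``--format`` option can be combined with crash's
existing ``-c`` flag like this (a sketch, not part of this commit; the
statement and the chosen format are examples)::

    sh$ crash --format csv -c "select name from sys.cluster"  # doctest: +SKIP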

2 changes: 1 addition & 1 deletion app/build.gradle
@@ -97,7 +97,7 @@ distZip {
ext {
downloadDir = new File(buildDir, 'downloads')
plugin_crateadmin_version = '0.11.2'
-  crash_version = '0.10.3'
+  crash_version = '0.11.1'
}

evaluationDependsOn(':es')
12 changes: 6 additions & 6 deletions docs/best_practice/cluster_upgrade.txt
@@ -100,7 +100,7 @@ Use the :ref:`ref-set` command to do so:
.. code-block:: psql

cr> SET GLOBAL TRANSIENT cluster.routing.allocation.enable = 'new_primaries';
-    SET OK (... sec)
+    SET OK, ...

.. note::

@@ -217,11 +217,11 @@ look:
... FROM information_schema.tables
... WHERE number_of_replicas = 0 and schema_name in ('blob', 'doc')
... ORDER BY schema, "table" ;
-    +--------+-------------------+
-    | schema | table |
-    +--------+-------------------+
+    +--------+------...+
+    | schema | table...|
+    +--------+------...+
...
-    +--------+-------------------+
+    +--------+------...+
SELECT ... rows in set (... sec)


@@ -277,7 +277,7 @@ allocations again that have been disabled in the first step:
.. code-block:: psql

cr> SET GLOBAL TRANSIENT cluster.routing.allocation.enable = 'all';
-    SET OK (... sec)
+    SET OK, ...


.. warning::
30 changes: 15 additions & 15 deletions docs/best_practice/data_import.txt
@@ -40,12 +40,12 @@ By default you would probably create the table like that::
... country STRING
... )
... );
-    CREATE OK (... sec)
+    CREATE OK, ...

.. hide:

cr> DROP TABLE user;
-    DROP OK (... sec)
+    DROP OK, ...

Well, there's nothing wrong with that and it does its job, but it is not very
performant and therefore not what we want to use in a real world
@@ -102,7 +102,7 @@ The ``CREATE TABLE`` statement now looks like::
... )
... ) CLUSTERED INTO 12 shards
... WITH (number_of_replicas = 0);
-    CREATE OK (... sec)
+    CREATE OK, ...

.. seealso::

@@ -121,12 +121,12 @@ minimise the overhead during the import.
::

cr> ALTER TABLE user SET (refresh_interval = 0);
-    ALTER OK (... sec)
+    ALTER OK, ...

.. hide:

cr> DROP TABLE user;
-    DROP OK (... sec)
+    DROP OK, ...

It is also possible to set the refresh interval directly
in the ``CREATE TABLE`` statement::
@@ -145,14 +145,14 @@ in the ``CREATE TABLE`` statement::
... number_of_replicas = 0,
... refresh_interval = 0
... );
-    CREATE OK (... sec)
+    CREATE OK, ...


Once the import is finished you can set the refresh interval to
a reasonable value (time in ms)::

cr> ALTER TABLE user SET (refresh_interval = 1000);
-    ALTER OK (... sec)
+    ALTER OK, ...

.. seealso::

@@ -172,14 +172,14 @@ completely by setting the ``indices.store.throttle.type`` to ``none``.
::

cr> SET GLOBAL TRANSIENT indices.store.throttle.type = 'none';
-    SET OK (... sec)
+    SET OK, ...

However, if you still want to throttle the merging of segments during import,
you can increase the maximum bytes per second from its default of ``20mb``
to something like 100-200mb/s for SSD disks::

cr> SET GLOBAL TRANSIENT indices.store.throttle.max_bytes_per_sec = '150mb';
-    SET OK (... sec)
+    SET OK, ...


After import you should not forget to turn throttling on again by setting its
@@ -188,7 +188,7 @@ value to ``merge`` (default) or ``all``.
::

cr> SET GLOBAL TRANSIENT indices.store.throttle.type = 'merge';
-    SET OK (... sec)
+    SET OK, ...

.. seealso::

@@ -226,7 +226,7 @@ COPY FROM Command
.. hide:

cr> REFRESH TABLE user;
-    REFRESH OK (... sec)
+    REFRESH OK, ...

.. note::

@@ -252,7 +252,7 @@ For example::
.. hide:

cr> REFRESH TABLE user;
-    REFRESH OK (... sec)
+    REFRESH OK, ...

In our example it will not make a difference, but if you have a more complex
data set with a lot of columns and large values, it probably makes sense to
@@ -279,7 +279,7 @@ For example::
.. hide:

cr> REFRESH TABLE user;
-    REFRESH OK (... sec)
+    REFRESH OK, ...

Partitioned Tables
------------------
@@ -299,7 +299,7 @@ or the other column must lose its constraint.
.. hide:

cr> DROP TABLE user;
-    DROP OK (... sec)
+    DROP OK, ...

::

@@ -315,7 +315,7 @@ or the other column must lose its constraint.
... ) CLUSTERED INTO 6 shards
... PARTITIONED BY (day_joined)
... WITH (number_of_replicas = 0);
-    CREATE OK (... sec)
+    CREATE OK, ...

To import data into partitioned tables efficiently you will have to import
each table partition separately. Since the value of the table partition is not
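For illustration, the partition-wise import described above might look like
this (a sketch, not part of this commit: the partition value and file path are
assumptions, and the partition column is given in the statement because it is
not part of the source rows)::

    cr> COPY user PARTITION (day_joined = 1408312800)
    ... FROM '/tmp/users_1408312800.json';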
6 changes: 3 additions & 3 deletions docs/best_practice/migrating_from_mongodb.txt
@@ -48,7 +48,7 @@ A basic CREATE TABLE statement looks as follows::
... name string,
... obj object (dynamic)
... ) clustered into 5 shards with (number_of_replicas = 0);
-    CREATE OK (... sec)
+    CREATE OK, ...

In Crate each field is indexed by default, so it is not necessary to create any
additional indices.
@@ -61,7 +61,7 @@ off::
... obj object (dynamic),
... dummy string INDEX OFF
... ) clustered into 5 shards with (number_of_replicas = 0);
-    CREATE OK (... sec)
+    CREATE OK, ...

For fields that contain text, consider using a full-text analyzer. This enables
powerful full-text search capabilities (a sketch follows at the end of this
excerpt). See :ref:`indices_and_fulltext` for
@@ -78,7 +78,7 @@ the table and also insert arbitrary objects into the obj column::
INSERT OK, 1 row affected (... sec)

cr> refresh table mytable2;
-    REFRESH OK (... sec)
+    REFRESH OK, ...

.. Hidden: wait for schema update so that newcol is available

10 changes: 5 additions & 5 deletions docs/blob.txt
@@ -16,7 +16,7 @@ Before adding blobs a ``blob table`` must be created. Let's use the
crate shell ``crash`` to issue the SQL statement::

sh$ crash -c "create blob table myblobs clustered into 3 shards with (number_of_replicas=1)"
-    CREATE OK (... sec)
+    CREATE OK, ...

Now crate is configured to allow blobs to be managed under the
``/_blobs/myblobs`` endpoint.
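
For illustration, uploading a blob to that endpoint might look like this (a
sketch, not part of this commit; the digest in the URL is assumed to be the
SHA-1 hash of the uploaded payload, here the literal string ``contents``)::

    sh$ curl -isSX PUT '127.0.0.1:4200/_blobs/myblobs/4a756ca07e9487f482465a99e8286abc86ba4dc7' --data-binary 'contents'  # doctest: +SKIP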
@@ -60,7 +60,7 @@ See :ref:`ref-create-blob-table` for details.
Creating a blob table with a custom blob data path::

sh$ crash -c "create blob table myblobs clustered into 3 shards with (blobs_path='/tmp/crate_blob_data')" # doctest: +SKIP
-    CREATE OK (... sec)
+    CREATE OK, ...


Altering a blob table
@@ -70,7 +70,7 @@ The number of replicas a blob table has can be changed using the ``ALTER BLOB
TABLE`` clause::

sh$ crash -c "alter blob table myblobs set (number_of_replicas=0)"
-    ALTER OK (... sec)
+    ALTER OK, ...

Uploading
=========
@@ -173,12 +173,12 @@ Deleting a blob table
Blob tables can be deleted similarly to normal tables (again using the crate shell here)::

sh$ crash -c "drop blob table myblobs"
-    DROP OK (... sec)
+    DROP OK, ...

.. Hidden: Re-create the blob table so information_schema will show it::

sh$ crash -c "create blob table myblobs clustered into 3 shards with (number_of_replicas=1)"
-    CREATE OK (... sec)
+    CREATE OK, ...


.. _`binary large objects`: http://en.wikipedia.org/wiki/Binary_large_object
4 changes: 2 additions & 2 deletions docs/hello.txt
@@ -15,7 +15,7 @@ the Crate distribution.

First let's connect to a running node::

-    cr> connect 127.0.0.1:4200;
+    cr> \connect 127.0.0.1:4200;
+------------------------+-----------+---------+-----------+---------+
| server_url | node_name | version | connected | message |
+------------------------+-----------+---------+-----------+---------+
@@ -34,7 +34,7 @@ the table ``tweets`` with all columns we need::
... text string INDEX using fulltext,
... user_id string
... );
-    CREATE OK (... sec)
+    CREATE OK, ...

Now we are ready to insert our first tweet::

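The actual insert statement is elided here; a minimal hypothetical sketch,
assuming the visible columns ``text`` and ``user_id`` suffice, could be::

    cr> insert into tweets (text, user_id) values ('hello crate', 'some_user');
    INSERT OK, 1 row affected (... sec)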
5 changes: 2 additions & 3 deletions docs/sql/aggregation.txt
@@ -26,8 +26,7 @@ This aggregation function simply returns the number of rows that match the query

`count(columnName)` is also possible, but currently only works on a primary key column.
The semantics are the same.
-The return value is always of type ``long``.
-::
+The return value is always of type ``long``::

cr> select count(*) from locations;
+----------+
@@ -375,4 +374,4 @@ do.

.. _Geometric Mean: https://en.wikipedia.org/wiki/Mean#Geometric_mean_.28GM.29
.. _Variance: https://en.wikipedia.org/wiki/Variance
-.. _Standard Deviation: https://en.wikipedia.org/wiki/Standard_deviation
\ No newline at end of file
+.. _Standard Deviation: https://en.wikipedia.org/wiki/Standard_deviation
