A rich Ruby API and DSL for the ElasticSearch search engine/database

Tire is a Ruby (1.8 or 1.9) client for the ElasticSearch search engine/database.

ElasticSearch is a scalable, distributed, cloud-ready, highly-available, full-text search engine and database with powerful aggregation features, communicating with JSON over RESTful HTTP, based on Lucene and written in Java.

This Readme provides a brief overview of Tire's features. The more detailed documentation is at

Both of these documents contain a lot of information. Please set aside some time to read them thoroughly before you blindly dive into „somehow making it work“. Just skimming through them won't do. For more information, please refer to the integration test suite and the issues.


OK. First, you need a running ElasticSearch server. Thankfully, it's easy. Let's define easy:

$ curl -k -L -o elasticsearch-0.17.2.tar.gz
$ tar -zxvf elasticsearch-0.17.2.tar.gz
$ ./elasticsearch-0.17.2/bin/elasticsearch -f

See, easy. On a Mac, you can also use Homebrew:

$ brew install elasticsearch

Now, let's install the gem via Rubygems:

$ gem install tire

Of course, you can install it from the source as well:

$ git clone git://
$ cd tire
$ rake install


Tire exposes an easy-to-use domain-specific language for fluent communication with ElasticSearch.

It also blends with your ActiveModel classes for convenient usage in Rails applications.

To test-drive the core ElasticSearch functionality, let's require the gem:

    require 'rubygems'
    require 'tire'

Please note that you can copy these snippets from the much more extensive and heavily annotated file examples/tire-dsl.rb.

Also, note that we're doing some heavy JSON lifting here. Tire uses the multi_json gem as a generic JSON wrapper, which allows you to use your preferred JSON library. We'll use the yajl-ruby gem in the full on mode here:

    require 'yajl/json_gem'

Let's create an index named articles and store/index some documents:

    Tire.index 'articles' do

      store :title => 'One',   :tags => ['ruby']
      store :title => 'Two',   :tags => ['ruby', 'python']
      store :title => 'Three', :tags => ['java']
      store :title => 'Four',  :tags => ['ruby', 'php']

    end

We can also create the index with custom mapping for a specific document type:

    Tire.index 'articles' do

      create :mappings => {
        :article => {
          :properties => {
            :id       => { :type => 'string', :index => 'not_analyzed', :include_in_all => false },
            :title    => { :type => 'string', :boost => 2.0,            :analyzer => 'snowball'  },
            :tags     => { :type => 'string', :analyzer => 'keyword'                             },
            :content  => { :type => 'string', :analyzer => 'snowball'                            }
          }
        }
      }

    end

Of course, we may have large amounts of data, and it may be impossible or impractical to add them to the index one by one. We can use ElasticSearch's bulk storage. Notice that collection items must have an id property or method, and should have a type property if you've set a specific mapping for the index.

    articles = [
      { :id => '1', :type => 'article', :title => 'one',   :tags => ['ruby']           },
      { :id => '2', :type => 'article', :title => 'two',   :tags => ['ruby', 'python'] },
      { :id => '3', :type => 'article', :title => 'three', :tags => ['java']           },
      { :id => '4', :type => 'article', :title => 'four',  :tags => ['ruby', 'php']    }
    ]

    Tire.index 'articles' do
      import articles
    end

We can easily manipulate the documents before storing them in the index, by passing a block to the import method, like this:

    Tire.index 'articles' do
      import articles do |documents|

        documents.each { |document| document[:title].capitalize! }

      end
    end

OK. Now, let's go search all the data.

We will be searching for articles whose title begins with letter “T”, sorted by title in descending order, filtering them for ones tagged “ruby”, and also retrieving some facets from the database:

    s = Tire.search 'articles' do
      query do
        string 'title:T*'
      end

      filter :terms, :tags => ['ruby']

      sort { by :title, 'desc' }

      facet 'global-tags' do
        terms :tags, :global => true
      end

      facet 'current-tags' do
        terms :tags
      end
    end
(Of course, we may also page the results with from and size query options, retrieve only specific fields or highlight content matching our query, etc.)
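The from and size options map directly onto the JSON payload ElasticSearch expects. As an illustrative sketch (plain Ruby, not Tire's internals), here is how a page number translates into that payload:

```ruby
# Illustration only: building the paging part of a search payload by
# hand. Tire does this for you when you use the DSL's paging options.
page     = 2
per_page = 10

payload = {
  :query => { :query_string => { :query => 'title:T*' } },
  :from  => (page - 1) * per_page,  # skip the hits of the previous pages
  :size  => per_page                # return at most this many hits
}

payload[:from]  # => 10
```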

Let's display the results:

    s.results.each do |document|
      puts "* #{ document.title } [tags: #{document.tags.join(', ')}]"
    end

    # * Two [tags: ruby, python]

Let's display the global facets (distribution of tags across the whole database):

    s.results.facets['global-tags']['terms'].each do |f|
      puts "#{f['term'].ljust(10)} #{f['count']}"
    end

    # ruby       3
    # python     1
    # php        1
    # java       1

Now, let's display the facets based on current query (notice that count for articles tagged with 'java' is included, even though it's not returned by our query; count for articles tagged 'php' is excluded, since they don't match the current query):

    s.results.facets['current-tags']['terms'].each do |f|
      puts "#{f['term'].ljust(10)} #{f['count']}"
    end

    # ruby       1
    # python     1
    # java       1
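The arithmetic behind a terms facet can be sketched in plain Ruby (a simplified illustration of what the counts mean, not how ElasticSearch computes them): count, for each tag, the number of documents in the relevant set that carry it. For the current-tags facet, that set is the documents matching the query:

```ruby
# Documents matching the query 'title:T*' (before the filter is applied).
matching = [
  { :title => 'Two',   :tags => ['ruby', 'python'] },
  { :title => 'Three', :tags => ['java'] }
]

# Tally how many matching documents carry each tag.
counts = Hash.new(0)
matching.each { |doc| doc[:tags].each { |tag| counts[tag] += 1 } }

counts  # => {"ruby"=>1, "python"=>1, "java"=>1}
```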

Notice that only local variables from the enclosing scope are accessible inside the blocks. If we want to access instance variables or methods from the outer scope, we have to use a slight variation of the DSL, passing the search and query objects around.

    @query = 'title:T*'

    Tire.search 'articles' do |search|
      search.query do |query|
        query.string @query
      end
    end
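The scoping behaviour comes down to how Ruby evaluates blocks. A common DSL pattern (shown here as a pure-Ruby sketch, not Tire's actual code) is to instance_eval a parameterless block inside the DSL object, which changes self and hides the caller's instance variables, while a block with a parameter is called normally, so the outer scope stays intact:

```ruby
# A tiny DSL object illustrating the two block-evaluation styles.
class TinyDSL
  def run(&block)
    # No parameters: evaluate inside this object, so `self` changes.
    # One parameter: call the block with this object, `self` stays put.
    block.arity < 1 ? instance_eval(&block) : block.call(self)
  end

  def greeting
    'hello from the DSL'
  end
end

TinyDSL.new.run { greeting }            # => "hello from the DSL"
TinyDSL.new.run { |dsl| dsl.greeting }  # same result, outer scope accessible
```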

Quite often, we need complex queries with boolean logic. Instead of composing long query strings such as tags:ruby OR tags:java AND NOT tags:python, we can use the bool query. In Tire, we build these declaratively:

    Tire.search 'articles' do
      query do
        boolean do
          should   { string 'tags:ruby' }
          should   { string 'tags:java' }
          must_not { string 'tags:python' }
        end
      end
    end

The best thing about boolean queries is that we can easily save these partial queries as Ruby blocks, to mix and reuse them later. So, we may define a query for the tags property:

    tags_query = lambda do |boolean|
      boolean.should { string 'tags:ruby' }
      boolean.should { string 'tags:java' }
    end

And a query for the published_on property:

    published_on_query = lambda do |boolean|
      boolean.must { string 'published_on:[2011-01-01 TO 2011-01-02]' }
    end

Now, we can combine these queries for different searches:

    Tire.search 'articles' do
      query do
        boolean &tags_query
        boolean &published_on_query
      end
    end

Note that you can pass options for configuring queries, facets, etc. by passing a Hash as the last argument to the method call:

    Tire.search 'articles' do
      query do
        string 'ruby python', :default_operator => 'AND', :use_dis_max => true
      end
    end

If configuring the search payload with blocks feels somehow too weak for you, you can pass a plain old Ruby Hash (or JSON string) with the query declaration to the search method:

    Tire.search 'articles', :query => { :fuzzy => { :title => 'Sour' } }

If this sounds like a great idea to you, you are probably able to write your application using just curl, sed and awk.

For debugging purposes, we can display the full query JSON for close inspection:

    puts s.to_json
    # {"facets":{"current-tags":{"terms":{"field":"tags"}},"global-tags":{"global":true,"terms":{"field":"tags"}}},"query":{"query_string":{"query":"title:T*"}},"filter":{"terms":{"tags":["ruby"]}},"sort":[{"title":"desc"}]}

Or, better, we can display the corresponding curl command to recreate and debug the request in the terminal:

    puts s.to_curl
    # curl -X POST "http://localhost:9200/articles/_search?pretty=true" -d '{"facets":{"current-tags":{"terms":{"field":"tags"}},"global-tags":{"global":true,"terms":{"field":"tags"}}},"query":{"query_string":{"query":"title:T*"}},"filter":{"terms":{"tags":["ruby"]}},"sort":[{"title":"desc"}]}'

However, we can simply log every search query (and other requests) in this curl-friendly format:

    Tire.configure { logger 'elasticsearch.log' }

When you set the log level to debug:

    Tire.configure { logger 'elasticsearch.log', :level => 'debug' }

the JSON responses are logged as well. This is not a great idea for production environment, but it's priceless when you want to paste a complicated transaction to the mailing list or IRC channel.

The Tire DSL tries hard to provide a strong Ruby-like API for the main ElasticSearch features.

By default, Tire wraps the results collection in an enumerable Results::Collection class, and result items in a Results::Item class, which looks like a child of Hash and OpenStruct, for smooth iterating over and displaying the results.

You may wrap the result items in your own class by setting the Tire.configuration.wrapper property. Your class must take a Hash of attributes on initialization.
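To illustrate the contract such a wrapper has to satisfy, here is a minimal sketch (SimpleItem is a hypothetical class for illustration, not part of Tire): it is initialized from a Hash of attributes and exposes them both via dot notation and via the [] operator:

```ruby
require 'ostruct'

# A hypothetical, minimal Item-like wrapper: takes a Hash of
# attributes on initialization and exposes them as methods and keys.
class SimpleItem < OpenStruct
  def [](key)
    send(key)  # delegate hash-style access to the generated accessors
  end
end

article = SimpleItem.new(:title => 'One', :tags => ['ruby'])
article.title    # => "One"
article[:tags]   # => ["ruby"]
```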

If that seems like a great idea to you, there's a big chance you already have such class.

One would bet it's an ActiveRecord or ActiveModel class, containing a model of your Rails application.

Fortunately, Tire makes blending ElasticSearch features into your models trivially possible.

ActiveModel Integration

NOTE: Please note that the ActiveModel/ActiveRecord integration will change considerably in the next release (for the better). You can read it up in the Readme on the activerecord branch. The reasoning for this change can be found at the tire#12 issue.

If you're the type with no time for lengthy introductions, you can generate a fully working example Rails application, with an ActiveRecord model and a search form, to play with (it even downloads ElasticSearch itself, generates the application skeleton, and leaves you with a Git repository to explore the steps and the code):

$ rails new searchapp -m

For the rest, let's suppose you have an Article class in your Rails application. To make it searchable with Tire, just include it:

    class Article < ActiveRecord::Base
      include Tire::Model::Search
      include Tire::Model::Callbacks
    end

When you now save a record:

    Article.create :title =>   "I Love ElasticSearch",
                   :content => "...",
                   :author =>  "Captain Nemo",
                   :published_on => Time.now

it is automatically added into the index, because of the included callbacks. (You may want to skip them in special cases, like when your records are indexed via some external mechanism, let's say CouchDB or RabbitMQ river for ElasticSearch.)

The document attributes are indexed exactly as when you call the Article#to_json method.

Now you can search the records:

    Article.search 'love'

OK. This is where the search game stops, often. Not here.

First of all, you may use the full query DSL, as explained above, with filters, sorting, advanced facet aggregation, highlighting, etc.:

    Article.search do
      query             { string 'love' }
      facet('timeline') { date   :published_on, :interval => 'month' }
      sort              { by     :published_on, 'desc' }
    end

Dynamic mapping is a godsend when you're prototyping. For serious usage, though, you'll definitely want to define a custom mapping for your model:

    class Article < ActiveRecord::Base
      include Tire::Model::Search
      include Tire::Model::Callbacks

      mapping do
        indexes :id,           :type => 'string',  :index    => :not_analyzed
        indexes :title,        :type => 'string',  :analyzer => 'snowball', :boost => 100
        indexes :content,      :type => 'string',  :analyzer => 'snowball'
        indexes :author,       :type => 'string',  :analyzer => 'keyword'
        indexes :published_on, :type => 'date',    :include_in_all => false
      end
    end

In this case, only the defined model attributes are indexed. The mapping declaration creates the index when the class is loaded or when the importing features are used, and only when it does not exist, yet. (It may well be reasonable to wrap the index creation logic in a class method of your model, so you have better control on index creation when bootstrapping your application or when setting up tests.)

When you want a tight grip on how the attributes are added to the index, just provide the to_indexed_json method in your model:

    class Article < ActiveRecord::Base
      include Tire::Model::Search
      include Tire::Model::Callbacks

      def to_indexed_json
        names      = author.split(/\W/)
        last_name  = names.pop
        first_name = names.join

        {
          :title   => title,
          :content => content,
          :author  => {
            :first_name => first_name,
            :last_name  => last_name
          }
        }.to_json
      end
    end

The results returned by Article.search are wrapped in the aforementioned Item class, by default. This way, we have fast and flexible access to the properties returned from ElasticSearch (via the _source or fields JSON properties). That means we can index whatever JSON we like in ElasticSearch, and retrieve it, simply, via the dot notation:

    articles = Article.search 'love'
    articles.each do |article|
      puts article.title
    end

The Item instances masquerade themselves as instances of your model in Rails (based on the _type property retrieved from ElasticSearch), so you can use them carefree; all the url_for or dom_id helpers work as expected.

If you need to access the “real” model (e.g. to access its associations or methods not stored in ElasticSearch), just load it from the database:

    puts article.load(:include => 'comments').comments.size

You can see that Tire stays as far from the database as possible. That's because it believes you have most of the data you want to display stored in ElasticSearch. When you need to load the records from the database itself, for whatever reason, you can do it with the :load option:

    # Will call Article.find [1, 2, 3]
    Article.search 'love', :load => true

Instead of a simple true, you can pass any options for the model's find method:

    # Will call Article.find [1, 2, 3], :include => 'comments'
    Article.search :load => { :include => 'comments' } do
      query { string 'love' }
    end

Note that Tire search results are fully compatible with will_paginate, so you can pass all the usual parameters to the search method in the controller:

    @articles = Article.search params[:q], :page => (params[:page] || 1)

OK. Chances are, you have lots of records stored in your database. How will you get them to ElasticSearch? Easy:

    Article.elasticsearch_index.import Article.all

This way, however, all your records are loaded into memory, serialized into JSON, and sent down the wire to ElasticSearch. Not practical, you say? You're right.

Provided your model implements some sort of pagination — and it probably does — you can just run:

    Article.import
In this case, the Article.paginate method is called, and your records are sent to the index in chunks of 1000. If that number doesn't suit you, just provide a better one:

    Article.import :per_page => 100

Any other parameters you provide to the import method are passed down to the paginate method.
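The chunked-import flow can be sketched in plain Ruby (an assumed illustration of the idea, not Tire's internals; import_in_chunks is a hypothetical helper): instead of one giant request, records are fetched page by page and each page becomes one bulk request:

```ruby
# Hypothetical sketch: split a record set into bulk-request-sized
# chunks, the way a paginated import walks through a model's records.
def import_in_chunks(records, per_page = 1000)
  # In a real import, each chunk would be serialized to JSON and sent
  # to ElasticSearch's bulk API; here we just return the chunks.
  records.each_slice(per_page).to_a
end

import_in_chunks((1..2500).to_a).map(&:size)  # => [1000, 1000, 500]
```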

Are we saying you have to fiddle with this thing in a rails console or silly Ruby scripts? No. Just call the included Rake task on the commandline:

    $ rake environment tire:import CLASS='Article'

You can also force-import the data by deleting the index first (and creating it with mapping provided by the mapping block in your model):

    $ rake environment tire:import CLASS='Article' FORCE=true

When you spend more time with ElasticSearch, you'll notice how index aliases are the best idea since the invention of the inverted index. You can index your data into a fresh index (and possibly update an alias if everything's fine):

    $ rake environment tire:import CLASS='Article' INDEX='articles-2011-05'

OK. All this time we have been talking about ActiveRecord models, since it is a reasonable Rails default for the storage layer.

But what if you use another database, such as MongoDB, and another object-mapping library, such as Mongoid?

Well, things stay mostly the same:

    class Article
      include Mongoid::Document
      field :title, :type => String
      field :content, :type => String

      include Tire::Model::Search
      include Tire::Model::Callbacks

      # Let's use a different index name so stuff doesn't get mixed up
      index_name 'mongo-articles'

      # These Mongo guys sure do some funky stuff with their IDs
      # in +serializable_hash+, let's fix it.
      def to_indexed_json
        self.to_json
      end
    end

    Article.create :title => 'I Love ElasticSearch'

    Article.search 'love'

That's kinda nice. But there's more.

Tire implements not only searchable features, but also persistence features. This means you can use a Tire model instead of your database, not just for searching your database. Why would you like to do that?

Well, because you're tired of database migrations and lots of hand-holding with your database to store stuff like { :name => 'Tire', :tags => [ 'ruby', 'search' ] }. Because what you need is to just dump a JSON representation of your data into a database and load it back when needed. Because you've noticed that searching your data is a much more effective way of retrieval than constructing elaborate database query conditions. Because you have lots of data and want to use ElasticSearch's advanced distributed features.

To use the persistence features, just include the Tire::Persistence module in your class and define the properties (like with CouchDB- or MongoDB-based models):

    class Article
      include Tire::Model::Persistence
      include Tire::Model::Search
      include Tire::Model::Callbacks

      validates_presence_of :title, :author

      property :title
      property :author
      property :content
      property :published_on
    end

Of course, not all validations or ActionPack helpers will be available to your models, but if you can live with that, you've just got a schema-free, highly-scalable storage and retrieval engine for your data.

Please be sure to peruse the integration test suite for examples of the API and ActiveModel integration usage.

Todo, Plans & Ideas

Tire is already used in production by its authors. Nevertheless, it's not considered finished yet.

There are todos, plans and ideas, some of which are listed below, in the order of importance:

  • Wrap all Tire functionality mixed into a model in a "forwardable" object, and proxy everything via this object. (The immediate problem: Mongoid)
  • If we're not stepping on other's toes, bring Tire methods like index, search, mapping also to the class/instance top-level namespace.
  • Proper RDoc annotations for the source code
  • Histogram facets
  • Statistical facets
  • Geo Distance facets
  • Index aliases management
  • Analyze API support
  • Embedded webserver to display statistics and to allow easy searches

Other Clients

Check out other ElasticSearch clients.


You can send feedback via e-mail or via GitHub Issues.

Karel Minarik and contributors
