Mapping not working #11

Closed
clintongormley opened this issue Feb 14, 2010 · 6 comments
Labels
>docs General docs changes

Comments

@clintongormley
Contributor

Hiya

Following the examples in your docs, create-mapping does not seem to work, e.g.:

curl -XPUT http://localhost:9200/twitter/tweet -d '
{
    tweet : {
        properties : {
            message : {type : "string", store : "yes"}
        }
    }
}
'

No handler found for uri [/twitter/tweet] and method [PUT]

I tried creating the index first, but same thing.

Also, the JSON format for specifying the mapping type to use when indexing a document is ambiguous, e.g.:

curl -XPUT http://localhost:9200/twitter/tweet/1 -d \
'
{
    tweet : {
        user : "kimchy",
        postDate : "2009-11-15T14:12:12",
        message : "trying out Elastic Search"
    }
}
'

Does that mean that the document has mapping type 'tweet', or that there is no mapping type specified and it has a single top-level key called 'tweet'?

And one thing I'm not sure about: is a mapping the same thing as a type? So you would never have type 'foo' and mapping 'bar'?

thanks

Clint

@kimchy
Member

kimchy commented Feb 15, 2010

There is a bug in the docs (which I just fixed); the URL should be http://localhost:9200/twitter/tweet/_mapping. Here is the example:

curl -XPUT http://localhost:9200/twitter/tweet/_mapping -d '
{
    tweet : {
        properties : {
            message : {type : "string", store : "yes"}
        }
    }
}
'

> I tried creating the index first, but same thing.

You first need to create the index explicitly to add mappings. I will add another issue so the index will be created automatically in this case.
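
In other words, the full sequence would be roughly this (a sketch; the explicit index creation call is assumed here, the mapping call is the one from above):

# create the index first (assumed endpoint), then add the mapping
curl -XPUT http://localhost:9200/twitter/

curl -XPUT http://localhost:9200/twitter/tweet/_mapping -d '
{
    tweet : {
        properties : {
            message : {type : "string", store : "yes"}
        }
    }
}
'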

> Also, the JSON format for specifying the mapping type to use when indexing a document is ambiguous

I'm not sure I understand why it's ambiguous. The indexable content can have the type as the first-level JSON field, but it's optional (since the type already exists in the URL and I can derive it from that).
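
So, roughly, both of these index the same document (a sketch; the wrapped form is the docs example from above, the unwrapped one is what I mean by the type being optional):

# type taken from the outermost JSON field
curl -XPUT http://localhost:9200/twitter/tweet/1 -d '
{
    tweet : {
        user : "kimchy",
        postDate : "2009-11-15T14:12:12",
        message : "trying out Elastic Search"
    }
}
'

# type taken from the URL only, no wrapper
curl -XPUT http://localhost:9200/twitter/tweet/1 -d '
{
    user : "kimchy",
    postDate : "2009-11-15T14:12:12",
    message : "trying out Elastic Search"
}
'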

> And one thing I'm not sure about: is a mapping the same thing as a type? So you would never have type 'foo' and mapping 'bar'?

Mappings are basically metadata on how to map the indexable JSON content of a type into the search engine. You can have more than one type, and each type can optionally have a mapping defined for it.
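
For example, a single index can hold more than one type, each with its own mapping (a sketch; the second 'user' type and its field are made up for illustration):

curl -XPUT http://localhost:9200/twitter/tweet/_mapping -d '
{
    tweet : {
        properties : {
            message : {type : "string", store : "yes"}
        }
    }
}
'

# hypothetical second type in the same index
curl -XPUT http://localhost:9200/twitter/user/_mapping -d '
{
    user : {
        properties : {
            name : {type : "string"}
        }
    }
}
'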

@kimchy
Member

kimchy commented Feb 15, 2010

Create mapping will now automatically create the index by default; see #12.

@clintongormley
Contributor Author

OK, so there is one mapping per index+type. Gotcha.

The reason that including the mapping name in the JSON is ambiguous is object type mappings: without a predefined mapping, you don't know whether the top-level 'tweet' key is a mapping name or the first key of the object being stored.

So either you always have to specify the mapping, in which case the second example would look like:

curl -XPUT http://localhost:9200/twitter/tweet/1 -d \
'
{
    tweet : {
        tweet : {
            user : "kimchy",
            postDate : "2009-11-15T14:12:12",
            message : "trying out Elastic Search"
        }
    }
}
'

or never specify the mapping in the JSON and just use the 'type' from the URL as the mapping name (which would be my preference).

@kimchy
Member

kimchy commented Feb 15, 2010

Yes, you are correct about the possible problem there. The reason I added support for that is to simplify working with JSON converters in different languages, which usually add the "type" as the outermost JSON object. Do you think it was a mistake?

@clintongormley
Contributor Author

I think that having it as either-or is a mistake, yes.

I'm not sure which is the better interface, though. My feeling (having just written a Perl interface to ElasticSearch) is that you have to generate the URL and the JSON anyway, and it looks more consistent to me to specify the type in the URL and the data in the JSON.

But as I say, it makes little difference to me which version you settle on.

@kimchy
Member

kimchy commented Feb 15, 2010

Let me think about it a bit more. The only case where it will break is the one you noted (a JSON object with the type name containing another JSON object with the same type name), and I am not sure people will ever generate JSON like that. I like the ability to try and support both...
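
That is, restating the case you noted: a document whose own top-level field happens to be called 'tweet' (a sketch, reusing the field names from the earlier example):

# ambiguous: is the outer "tweet" the type wrapper, or a real object field of the document?
curl -XPUT http://localhost:9200/twitter/tweet/1 -d '
{
    tweet : {
        user : "kimchy",
        message : "trying out Elastic Search"
    }
}
'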

This issue was closed.