norms.enabled/omit_norms serialization and parsing are inconsistent #4760
ghost assigned jpountz on Jan 16, 2014
jpountz added a commit to jpountz/elasticsearch that referenced this issue on Jan 20, 2014
…fields. StringFieldMapper.toXContent uses the defaults for analyzed fields in order to know which options to add to the builder. This means that if the field is not analyzed and has norms enabled, it will fail to emit `norms.enabled: true`. Parsing the mapping again will then result in a StringFieldMapper that has norms disabled. The same fix applies to index options. Closes elastic#4760
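For illustration, a rough sketch of what the fixed serialization should emit for a `not_analyzed` string field that has norms enabled (the field name `my_field` is hypothetical, not taken from the commit); before the fix, the `norms` object was omitted because the analyzed defaults were used for the comparison:

```json
{
  "my_field": {
    "type": "string",
    "index": "not_analyzed",
    "norms": { "enabled": true }
  }
}
```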
bleskes added a commit to bleskes/elasticsearch that referenced this issue on Nov 9, 2014
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if there is one). If this results in different bytes than the master sent, the nodes will send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at elastic#4760. This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed with every cluster state update.
bleskes added a commit that referenced this issue on Nov 10, 2014
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if there is one). If this results in different bytes than the master sent, the nodes will send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at #4760. This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed with every cluster state update. Closes #8413
bleskes added a commit that referenced this issue on Nov 10, 2014
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if there is one). If this results in different bytes than the master sent, the nodes will send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at #4760. This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed with every cluster state update. Closes #8413
bleskes added a commit to bleskes/elasticsearch that referenced this issue on Dec 10, 2014
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if there is one). If this results in different bytes than the master sent, the nodes will send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at elastic#4760. This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed with every cluster state update. Closes elastic#8413
mute pushed a commit to mute/elasticsearch that referenced this issue on Jul 29, 2015
When data nodes receive mapping updates from the master, they parse them and merge them into their own in-memory representation (if there is one). If this results in different bytes than the master sent, the nodes will send a refresh-mapping command to indicate to the master that its byte-level storage of the mapping should be refreshed via the document mappers. This comes in handy when the mapping format has changed in a backwards-compatible manner and we want to make sure we can still rely on the bytes to identify changes. An example of such a change can be seen at elastic#4760. This commit extends the logic to include the `_default_` type, which was never refreshed before. In some unlucky scenarios, this caused the `_default_` mapping to be parsed with every cluster state update. Closes elastic#8413
We compare the field type against the default one in `toXContent` in order to only serialize changes from the default field type, which has `norms.enabled: true`. However, we also have some logic to omit norms in case norms have not been configured and the field is `not_analyzed`. This means that if you configure a field to be `not_analyzed` and have norms enabled, it will be parsed correctly, but serializing it back with `toXContent` drops the setting, as sketched below:
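A minimal sketch of the two sides of the round trip, assuming a string field (the field name `my_field` is hypothetical). The mapping as configured:

```json
{
  "my_field": {
    "type": "string",
    "index": "not_analyzed",
    "norms": { "enabled": true }
  }
}
```

and roughly what serializing the parsed field back through `toXContent` produces:

```json
{
  "my_field": {
    "type": "string",
    "index": "not_analyzed"
  }
}
```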
`norms.enabled` is missing because it is the same as in the default field type, so parsing it again would return a field which has norms disabled. The same is true for `index_options` (docs, positions and offsets by default, docs only in the case of `not_analyzed` fields), but this is less of an issue given that it doesn't make sense to index offsets on a `not_analyzed` field.