Documentation error: Object Type mapping JSON #18
Labels: docs (General docs changes)

Comments
Fixed, thanks!
dadoonet added a commit that referenced this issue on Jun 5, 2015:
Due to fix [3790](#3790) in core, upgrading an analyzer provided as a plugin now fails. See #4936 for details. Issue is in elasticsearch core code but can be fixed in plugins by overloading `PreBuiltTokenFilterFactoryFactory` and `PreBuiltAnalyzerProviderFactory`. Closes #18. (cherry picked from commit fc68d81)
dadoonet added a commit that referenced this issue on Jun 5, 2015.
dadoonet added a commit that referenced this issue on Jun 5, 2015.
dadoonet added a commit that referenced this issue on Jun 5, 2015:
Closes #18 (cherry picked from commit e2a98c9)
dadoonet added a commit that referenced this issue on Jun 5, 2015:
There's much better documentation (including using existing OpenSSH keys, which most of us already have for git) in the Azure docs, e.g.: http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/ Closes #18.
rmuir pushed a commit to rmuir/elasticsearch that referenced this issue on Nov 8, 2015:
Original request: I am sending multiple pdf, word etc. attachments in one document to be indexed. Some of them (pdf) are encrypted and I am getting a MapperParsingException caused by org.apache.tika.exception.TikaException: Unable to extract PDF content, caused by org.apache.pdfbox.exceptions.WrappedIOException: Error decrypting document. I was wondering if the attachment mapper could expose some switch to ignore the documents it cannot extract? As we now have the option `ignore_errors`, we can support it. See elastic#38 relative to this option. Closes elastic#18.
ClaudioMFreitas pushed a commit to ClaudioMFreitas/elasticsearch-1 that referenced this issue on Nov 12, 2019:
[Testing] Make centos7 working
henningandersen pushed a commit to henningandersen/elasticsearch that referenced this issue on Jun 4, 2020:
Add a new challenge `elasticlogs-continuous-index-and-query` suitable for long-running benchmarks. This commit also includes: * an updated version of the `deleteindex_runner.py` to help keep rolled-over indices to a defined size by deleting older ones. * updates to `README.md`. * a test/helper script tests/validate_challanges.py to assist with the JSON validation of challenges that contain embedded j2 DSL. Relates elastic#18
williamrandolph pushed a commit to williamrandolph/elasticsearch that referenced this issue on Jun 4, 2020:
Add merging option to Settings class; wrap up move to Settings class throughout the project. Fixes elastic#10, fixes elastic#18.
mindw pushed a commit to mindw/elasticsearch that referenced this issue on Sep 5, 2022:
Deploy merger and recorder in cluster. Problem: we need to deploy the merger and recorder within our cluster. Solution: make the necessary changes to add two additional instances to the cluster that the terraform scripts deploy. Both of these use the 'dequeue' AMI; one of them runs the merger and the other runs the recorder. Approved-by: Gideon Avida
cbuescher pushed a commit to cbuescher/elasticsearch that referenced this issue on Oct 2, 2023.
cbuescher pushed a commit to cbuescher/elasticsearch that referenced this issue on Oct 2, 2023:
With this commit we analyze the provided fixtures to determine whether this is a "normal" release benchmark or a release benchmark that is running against an encrypted volume. To tell apart both, we set different tags in Rally. Relates elastic#18
This issue was closed.
In http://www.elasticsearch.com/docs/elasticsearch/mapping/object_type/ there are a few occurrences of `type = "object"` which should be `type: "object",`.
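For context, `type = "object"` is not valid JSON, since JSON requires `key: value` pairs separated by commas. A minimal sketch of what a corrected object-type mapping would look like (the field names here are illustrative, not taken from the docs page):

```json
{
    "person": {
        "properties": {
            "address": {
                "type": "object",
                "properties": {
                    "city": { "type": "string" }
                }
            }
        }
    }
}
```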