Content-Type header [] is not supported #350

Closed
morgango opened this issue Sep 28, 2017 · 30 comments

@morgango

I seem to be getting the error Content-Type header [] is not supported with the Elasticsearch 6 beta.

Is there a command-line argument I can add to include the content type? This isn't a problem specific to elasticdump; it is a change in Elasticsearch 6.

@morgango
Author

morgango commented Sep 28, 2017

I answered my own question with a simple read of the documentation. I added

--headers='{"Content-Type": "application/json"}'

to the end of my command and it worked splendidly.
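For reference, a full command using this flag might look like the following (hosts and index name are placeholders, not from this thread):

elasticdump --input=http://localhost:9200/my-index --output=http://localhost:9201/my-index --headers='{"Content-Type": "application/json"}'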

@cRUSHr2012

cRUSHr2012 commented Oct 4, 2017

For me it's not working. I am using elasticdump to migrate some data from 5.5.1.1 (1 node) to 5.6.2.1 (3 nodes).
Elasticdump version is 3.3.1.
On the destination cluster I have the following warning message:
[o.e.d.r.RestController] Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header.
I've used tcpdump to confirm that no Content-Type: application/json header is sent.
If I add --headers='{"Content-Type": "application/json"}' to the end of the command, I get the error below (but I do see the Content-Type: application/json header in tcpdump).
I'm receiving the error when it comes to --type=data.
For --type=analyzer or --type=mapping there are no issues. I used --limit=1 to limit the number of errors, and also tried the tool without the --type parameter.
elasticdump --input=http://elas-01:9200/categories-7 --output=http://es-01:9200/categories-7 --headers='{"Content-Type": "application/json"}' --limit=1

Wed, 04 Oct 2017 07:29:19 GMT | starting dump
Wed, 04 Oct 2017 07:29:19 GMT | got 1 objects from source elasticsearch (offset: 0)
Wed, 04 Oct 2017 07:29:19 GMT | sent 1 objects to destination elasticsearch, wrote 1
Wed, 04 Oct 2017 07:29:19 GMT | Error Emitted => {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Failed to parse request body"}],"type":"illegal_argument_exception","reason":"Failed to parse request body","caused_by":{"type":"json_parse_exception","reason":"Unrecognized token 'DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAF7ZFmJLc0JFdDFqVExpaWlTVzJhWlpwZWcAAAAAAABe2hZiS3NCRXQxalRMaWlpU1cyYVpacGVnAAAAAAAAXtsWYktzQkV0MWpUTGlpaVNXMmFaWnBlZwAAAAAAAF7dFmJLc0JFdDFqVExpaWlTVzJhWlpwZWcAAAAAAABe3BZiS3NCRXQxalRMaWlpU1cyYVpacGVn': was expecting ('true', 'false' or 'null')\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@63015fd9; line: 1, column: 457]"}},"status":400}
Wed, 04 Oct 2017 07:29:19 GMT | Total Writes: 1
Wed, 04 Oct 2017 07:29:19 GMT | dump ended with error (get phase) => Error: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Failed to parse request body"}],"type":"illegal_argument_exception","reason":"Failed to parse request body","caused_by":{"type":"json_parse_exception","reason":"Unrecognized token 'DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAF7ZFmJLc0JFdDFqVExpaWlTVzJhWlpwZWcAAAAAAABe2hZiS3NCRXQxalRMaWlpU1cyYVpacGVnAAAAAAAAXtsWYktzQkV0MWpUTGlpaVNXMmFaWnBlZwAAAAAAAF7dFmJLc0JFdDFqVExpaWlTVzJhWlpwZWcAAAAAAABe3BZiS3NCRXQxalRMaWlpU1cyYVpacGVn': was expecting ('true', 'false' or 'null')\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@63015fd9; line: 1, column: 457]"}},"status":400}

@cRUSHr2012

cRUSHr2012 commented Oct 4, 2017

I've noticed something: the error appears after the first batch of elements.
If the batch is the default one (100 elements), the error appears before the second batch is processed, and I can see all 100 documents on the destination cluster.
If I use a batch of 1 element, the error appears before the second one is processed.
A sample of exported (source) data with elasticdump:

{"_index":"categories-7","_type":"store_1","_id":"387","_score":1,"_source":{"parent_id":"154","path":"1/2/153/154/387","position":"1","level":"4","name":"Rame foto","thumbnail":"stsgt0abb_1.jpg","url_key":"rame-foto-clasice","display_mode":"PRODUCTS","request_path":"camere/rame/rame-foto","id":"387","status":"1","banners":["7","27"]}}
{"_index":"categories-7","_type":"store_1","_id":"166","_score":1,"_source":{"parent_id":"163","path":"1/2/153/157/163/166","position":"1","level":"5","name":"SD","thumbnail":"501.jpg","meta_description":"Carduri SD.","meta_title":"Carduri SD","meta_keywords":"carduri SD","url_key":"alte-accesorii","display_mode":"PRODUCTS","request_path":"camere-foto/accesorii-foto/carduri-memorie/alte-accesorii","id":"166","status":"1","banners":["7","27"]}}

@cRUSHr2012

cRUSHr2012 commented Oct 4, 2017

Update: this does not happen when I use the file method, only when I use Elasticsearch instances as both input and output.

@morgango
Author

This is a required change for version 6; it might make sense to make this the default. I don't think it would have a negative impact when working with older versions.

@fernandowiek

fernandowiek commented Nov 22, 2017

@cRUSHr2012 I have the same issue that you had. Did you figure out a solution?
I tried this: elasticdump --input=http://elasticsearch.xxxx.xxxx:9200/logs-$date --output=/data/elasticsearch/index-$date.json --type=data --headers='{"Content-Type": "application/json"}'

@JelmenGuhlke

@cRUSHr2012 @fernandowiek I have the same error after 100 documents, but also when I'm using the file method. Is there any solution? I started using ES 6 from the beginning, so without any migration process.

@JelmenGuhlke

OK, I answered my own post. For me, adding --limit=2000 as a parameter works fine!
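For reference, the full command would then look something like this (hosts and index name are placeholders, with the header flag from earlier in the thread):

elasticdump --input=http://localhost:9200/my-index --output=http://localhost:9201/my-index --headers='{"Content-Type": "application/json"}' --limit=2000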

@Amit-A

Amit-A commented Nov 24, 2017

I'm getting:

Fri, 24 Nov 2017 03:51:17 GMT | starting dump
Fri, 24 Nov 2017 03:51:17 GMT | got 1 objects from source elasticsearch (offset: 0)
Fri, 24 Nov 2017 03:51:19 GMT | Error Emitted => "Content-Type header [application/octet-stream] is not supported"
Fri, 24 Nov 2017 03:51:19 GMT | Total Writes: 0
Fri, 24 Nov 2017 03:51:19 GMT | dump ended with error (set phase) => Content-Type header [application/octet-stream] is not supported

@JelmenGuhlke

@Amit-A
Did you add --headers='{"Content-Type": "application/json"}' ?

@fernandowiek

@JelmenGuhlke Could you share your elasticdump command here? I added --limit=2000 to mine and the same issue persists.

@ferronrsmith
Collaborator

@cRUSHr2012 Your 🐛 seems to be related to #344

@cRUSHr2012

@ferronrsmith I am not using Elasticsearch 6.x

@ferronrsmith
Collaborator

@cRUSHr2012 Yeah, I know, but I was having the same issue and it fixed it for me.

@maniankara

Just an added note that might help someone.
For those using Elasticsearch 6.0 who hit the "reason":"Unrecognized token .... problem: I was stuck on this because I was passing the --headers='{"Content-Type": "application/json"}' option together with --type=data. The data type does not need the explicit --headers param; it works without it.
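A sketch of that split, with placeholder hosts and index name (explicit header for the mapping phase, as used earlier in the thread, and none for the data phase):

# mapping: pass the explicit JSON header
elasticdump --input=http://source:9200/my-index --output=http://dest:9200/my-index --type=mapping --headers='{"Content-Type": "application/json"}'
# data: no explicit header needed
elasticdump --input=http://source:9200/my-index --output=http://dest:9200/my-index --type=data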

@saifulmuhajir

saifulmuhajir commented Dec 13, 2017

Thanks for the tip.
Unfortunately, adding --headers does not work when the mapping has more than one type; it fails with this error:

Wed, 13 Dec 2017 09:01:16 GMT | starting dump
Wed, 13 Dec 2017 09:01:16 GMT | got 1 objects from source file (offset: 0)
Wed, 13 Dec 2017 09:01:16 GMT | Error Emitted => {"root_cause":[{"type":"illegal_argument_exception","reason":"Rejecting mapping update to [development] as the final mapping would have more than 1 type: [users, orders]"}],"type":"illegal_argument_exception","reason":"Rejecting mapping update to [development] as the final mapping would have more than 1 type: [users, orders]"}
Wed, 13 Dec 2017 09:01:16 GMT | Total Writes: 0

I am using it with AWS ES 6.0.

Is it because of this: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/removal-of-types.html ?

Edit: Apparently so; there should be only one mapping type. Thanks, everyone.
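To check how many mapping types an index carries before importing, you can inspect the standard mapping endpoint (the index name development is taken from the error above; the host is a placeholder):

curl -s 'http://localhost:9200/development/_mapping'

An index created on 6.x may hold only a single mapping type, so a dump whose mappings contain two types (users and orders here) is rejected on import.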

@aliostad

aliostad commented Dec 13, 2017

I am getting the error @maniankara mentioned (unrecognised token):

I am sending both --type=data and --headers. If I do not send the headers, I get an error as well (the same Content-Type header [] is not supported).

Wed, 13 Dec 2017 11:49:14 GMT | starting dump
Wed, 13 Dec 2017 11:49:16 GMT | got 100 objects from source elasticsearch (offset: 0)
Wed, 13 Dec 2017 11:49:16 GMT | sent 100 objects to destination file, wrote 100
Wed, 13 Dec 2017 11:49:16 GMT | Error Emitted => {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Failed to parse request body"}],"type":"illegal_argument_exception","reason":"Failed to parse request body","caused_by":{"type":"json_parse_exception","reason":"Unrecognized token 'DnF1ZXJ5VGhlbkZldGNowAIAAAAAAN4jWRZQaERwb3lnaVE1NkVDVE9qSGkwVnRRAAAAAAETdU8WSU8yNjA0NDZUV0N0blgtNTlaVG9RdwAAAAAA7vGUFnBQQjkxXzdsVEpLRlVNbUNSdzdXOXcAAAAAAO7xlRZwUEI5MV83bFRKS0ZVTW1DUnc3Vzl3AAAAAADu8ZYWcFBCOTFfN2xUSktGVU1tQ1J3N1c5dwAAAAABE3VQFklPMjYwNDQ2VFdD...': was expecting ('true', 'false' or 'null')\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@348bd801; line: 1, column: 257]"}},"status":400}
Wed, 13 Dec 2017 11:49:16 GMT | Total Writes: 100
Wed, 13 Dec 2017 11:49:16 GMT | dump ended with error (get phase) => Error: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Failed to parse request body"}],"type":"illegal_argument_exception","reason":"Failed to parse request body","caused_by":{"type":"json_parse_exception","reason":"Unrecognized token 'DnF1ZXJ5VGhlbkZldGNowAIAAAAAAN4jWRZQaERwb3lnaVE1NkVDVE9qSGkwVnRRAAAAAAETdU8WSU8yNjA0NDZUV0N0blgtNTlaVG9RdwAAAAAA7vGUFnBQQjkxXzdsVEpLRlVNbUNSdzdXOXcAAAAAAO7xlRZwUEI5MV83bFRKS0ZVTW1DUnc3Vzl3AAAAAADu8ZYWcFBCOTFfN2xUSktGVU1tQ1J3N1c5dwAAAAABE3VQFklPMjYwNDQ2VFdD...': was expecting ('true', 'false' or 'null')\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@348bd801; line: 1, column: 257]"}},"status":400}

@my1234567

@aliostad I get the same error as you when adding both --type=data and --headers. I am using ES 6.0. Is there any solution? Thanks.

@ferronrsmith
Collaborator

#364

@evantahler
Collaborator

Closing in favor of the above.

@charlesdmir

When is the release?

@indrajithi

For those who get an error using --headers='{"Content-Type": "application/json"}', try -H'Content-Type: application/json' instead.
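For example (hosts and index name are placeholders; whether the short -H form is accepted may depend on your elasticdump version):

elasticdump --input=http://localhost:9200/my-index --output=http://localhost:9201/my-index -H'Content-Type: application/json'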

@WhisperLoli

I tried -H'Content-Type: application/json', but I got the same error:
Error Emitted => {"error":"Content-Type header [] is not supported","status":406}

@WhisperLoli

@morgango Using your solution with --limit=10000, I can only dump 10,000 records, but my data has more than 10,000. I need your help, thanks.

@Zutong

Zutong commented Jan 20, 2018

I found a way to solve the [Error Emitted => {"root_cause":[{"type":"illegal_argument_exception".....] problem.
Just change --headers='{"Content-Type": "application/json"}' to --headers='{"Content-Type": "application/x-ndjson"}'. All problems solved!
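For example (hosts and index name are placeholders):

elasticdump --input=http://localhost:9200/my-index --output=http://localhost:9201/my-index --type=data --headers='{"Content-Type": "application/x-ndjson"}'

application/x-ndjson is the content type the Elasticsearch documentation recommends for the _bulk endpoint, which may be why it helps on the write phase.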

@WhisperLoli

Error: {"error":"Content-Type header [application/x-ndjson] is not supported","status":406}
This is a new error! I think my Elasticsearch version (6.0.0) is what causes this problem.

@siddhartha-chandra

siddhartha-chandra commented Jan 30, 2018

@smileTou : any luck figuring out how to dump more than 10,000 records? I am facing the same issue. Also, I'm curious why the error I got goes away when I specify a limit of 10,000 for the command. How does just increasing the number of records affect the way the request body is parsed?

The error I am referring to is:

Mon, 29 Jan 2018 21:49:32 GMT | Error Emitted => {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Failed to parse request body"}],"type":"illegal_argument_exception","reason":"Failed to parse request body","caused_by":{"type":"json_parse_exception","reason":"Unrecognized token 'DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAIaxFjg5V1pqT1Z2UlVPYXlHMWNtUXltWXcAAAAAAACJWRZNSjMxS2dGUFNnZWZnV0RHbkw1WUNnAAAAAAAAjTIWYXhOVm5FamtRZ3FMTHdJblpIV19vQQAAAAAAAI0zFmF4TlZuRWprUWdxTEx3SW5aSFdfb0EAAAAAAACJWhZNSjMxS2dGUFNnZWZnV0RHbkw1WUNn': was expecting ('true', 'false' or 'null')\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@4651072e; line: 1, column: 457]"}},"status":400}

The Elasticsearch version that I am using is 6.1.0

@siddhartha-chandra

Elasticdump 3.3.2 onwards seems to solve it.

This commit, specifically:
ea0178d
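If you installed via npm, upgrading the globally installed package should pick up that fix (assuming a global npm install):

npm install -g elasticdump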

@aligit

aligit commented May 15, 2018

This might happen because of a self-signed SSL certificate. If that is the case, none of the above solutions will help.
As mentioned in the README, putting NODE_TLS_REJECT_UNAUTHORIZED=0 before the elasticdump command can solve this issue (Linux only).
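For example (hosts and index name are placeholders; note that this disables TLS certificate verification for the entire Node.js process, so only use it against hosts you trust):

NODE_TLS_REJECT_UNAUTHORIZED=0 elasticdump --input=https://source:9200/my-index --output=https://dest:9200/my-index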

@abhimanyu3-zz

I am pulling data from Elasticsearch into pandas. My document count is in the billions. I noticed a weird thing: when I match the numbers between pandas and Kibana, they are not the same. Sometimes there are more in Kibana and sometimes more in pandas for the same time period. Is this normal, or is it happening because of the volume of data I am parsing?
