_mapping is not duplicated #58
Comments
When I create the mapping before importing from a file, the mapping stays correct.
Thanks, glad you like the tool. What version of ES are you exporting from, and what version are you exporting to? Unfortunately those errors don't show me what about the mapping didn't work; it looks more like the connection didn't work. Typically ES will say what it doesn't like about a mapping. Some information about the mapping itself might be useful.
I'm using elasticsearch-1.1.0 and I try to export/import on the same server/node. Maybe it's an open/close socket issue with "http.request"? Can we set the log to DEBUG mode in exporter.js?
Well, the exporter doesn't have any additional log output; it's either on or off. To configure the log output of Elasticsearch you need to edit conf/logging.yml and restart the server.
For information, the same issue occurs with node.js/exporter.js on the Elasticsearch host:
I was able to replicate the problem. Expect a fix soon.
So it turns out the part that wasn't working was the reporting: successful responses were mistakenly identified as errors. In my tests the mapping is created and looks fine. Could you double-check, though, whether your target mapping is the same as the source mapping? I do get reports of the mapping being dynamic, but that could be because the source mapping is also dynamic.
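The fix as described comes down to classifying responses by their HTTP status code. A minimal sketch of that idea (the function name is invented; this is not the exporter's actual code):

```javascript
// Hypothetical status-based error classification. Earlier logic reportedly
// flagged some successful PUT responses as errors; treating any 2xx status
// as success avoids that.
function isErrorResponse(statusCode) {
    return statusCode < 200 || statusCode >= 300;
}

// e.g. a 200 from PUT on an index endpoint is a success,
// while a 400 mapping rejection is an error.
```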
Since there hasn't been an update for a while, I'm closing this issue for now until new feedback comes in. |
I'm also experiencing the same issue, using the latest version of both the exporter and Elasticsearch. Whenever I try copying a full index, to the same or a different server, the copy always ends up without the mapping from the source index. A workaround seems to be creating the index first with the correct mapping and then running the exporter.
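That workaround can be sketched as follows. The ES 1.x mapping response shape is assumed, and the `buildCreateBody` helper and index names are illustrative, not part of the exporter:

```javascript
// Sketch of the manual workaround: turn a GET /{index}/_mapping response
// (ES 1.x shape: {"<index>": {"mappings": {...}}}) into the request body
// for creating the target index. No HTTP is performed here.
function buildCreateBody(mappingResponse, index) {
    return { mappings: mappingResponse[index].mappings };
}

// Illustrative source mapping as ES 1.x would return it:
const sourceResponse = {
    lanceur: { mappings: { event: { properties: { msg: { type: 'string' } } } } }
};
const body = buildCreateBody(sourceResponse, 'lanceur');
// PUT JSON.stringify(body) to the target (e.g. a lanceur_bkp index) before
// starting the export, so documents are indexed against the correct mapping.
```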
Thanks for reporting the error. Do you have some more output or information that I could use to track down the problem?
Well, the logs don't give much information; basically all I get is
but the mapping from the source index is not applied and instead we get the dynamic mapping on the destination. Tested on ES v1.1.2 and 1.2.3. |
I just tried first dumping an index into a file and then importing it into ES, but the mapping issue remains, despite the fact that I can see the correct mapping in the .meta file generated by the export.
Thanks, that does help somewhat. Can you share the .meta file that doesn't work?
Hi mallocator, thanks for your module. I'm experiencing the same kind of issue; I will dump you some data. For some reason, in the function,
data.hits is undefined. Error reported:
ES 1.2.2 used as source and target.
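An undefined `data.hits` suggests the response from ES was not the expected search/scroll shape, for instance after a dropped connection. A defensive check, sketched here with invented names rather than the exporter's actual internals, would surface the real payload instead of failing later:

```javascript
// Hypothetical guard around an ES search/scroll response. ES nests documents
// under hits.hits; if the request failed upstream, data.hits can be undefined.
function extractHits(data) {
    if (!data || !data.hits || !Array.isArray(data.hits.hits)) {
        throw new Error('Unexpected ES response: ' + JSON.stringify(data));
    }
    return data.hits.hits;
}
```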
A fix has just been posted by @ceilingfish that might resolve mapping issues. Give it a try if you can with the current master build. |
If this is still a problem I'll reopen this issue |
Hello,
Firstly, thanks for your module: it's very simple and useful!
I get an error when I try to duplicate an index; the mapping is not correct.
node exporter.js -a -i -j <new_index>
In the console log I get the message:
<...>
Waiting for mapping on target host to be ready, queue length 400
Waiting for mapping on target host to be ready, queue length 450
Waiting for mapping on target host to be ready, queue length 500
Host phmbusllogb01:9200 responded to PUT request on endpoint /lanceur_bkp with an error
Mapping is now ready. Starting with 500 queued hits.
Host phmbusllogb01:9200 responded to PUT request on endpoint /lanceur_bkp with an error
Mapping is now ready. Starting with 0 queued hits.
Processed 100 of 2268 entries (4%)
Processed 700 of 2268 entries (31%)
<...>
When I go to http://:9200/<new_index>/_mapping I don't see the original mapping but the dynamic mapping.
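To confirm the symptom, the source and target mappings can be fetched and compared. A rough sketch, assuming the ES 1.x response shape (the `sameMapping` helper and index names are illustrative):

```javascript
// Compare the mappings of two indices from their GET /{index}/_mapping
// responses. String comparison is a rough check: it is key-order-sensitive,
// which is acceptable for responses serialized the same way.
function sameMapping(sourceResp, sourceIndex, targetResp, targetIndex) {
    return JSON.stringify(sourceResp[sourceIndex].mappings) ===
           JSON.stringify(targetResp[targetIndex].mappings);
}
```

A dynamic mapping on the target typically shows inferred types (e.g. `string` everywhere) instead of the source's explicit ones, so the comparison fails immediately.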
in ES Log :
<...>creating index, cause [api], shards [5]/[0], mappings [mappings]
<...>update_mapping XXXXX
Edit:
In DEBUG mode on the ES side I get an exception:
[2014-06-25 14:15:41,229][DEBUG][cluster.service ] [XXXXX] processing [routing-table-updater]: execute
[2014-06-25 14:15:41,230][DEBUG][cluster.service ] [XXXXX] processing [routing-table-updater]: no change in cluster_state
[2014-06-25 14:15:44,458][DEBUG][cluster.service ] [XXXXX] processing [create-index [lanceur_bkp], cause [api]]: execute
[2014-06-25 14:15:45,399][DEBUG][http.netty ] [XXXXX] Caught exception while handling client http traffic, closing connection [id: 0x1ddde1db, /XXXXX:56038 => /XXXXX:9200]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
[2014-06-25 14:15:45,399][DEBUG][http.netty ] [XXXXX] Caught exception while handling client http traffic, closing connection [id: 0x85b4aa05, /XXXXX:56134 => /XXXXX:9200]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)