
Elastic search throwing connection error when trying to use bulk API to index documents #7118

Closed
karthikdandavathi opened this issue Aug 1, 2014 · 1 comment

@karthikdandavathi

Please help me understand the following error:

```python
def index_logs(self, index_name, doc_type):
    es = Elasticsearch()
    logger.info("Connection to elastic search successful")
    no_of_entries = 0
    entries = []

    for i in range(len(self.store_batt_life_metrics)):
        # Build fresh dicts on every iteration; reusing a single
        # metadata/source dict would make every appended entry
        # reference the same (last-written) object.
        metadata = {}
        source = {}
        d = {'_index': 'my-index', '_type': 'metrics',
             '_id': DukeMetricsLogic.get_unique_ids(),
             '@duke_build_numbers': self.duke_builds[i],
             '@duke_batt_life': self.store_batt_life_metrics[i],
             '@timestamp': self.duke_time_stamps[i]}
        for key, value in d.items():
            if key[0] == "_":
                metadata[key] = value
            else:
                source[key] = value
        entries.append({'index': metadata})
        entries.append(source)
        no_of_entries += 1

    logger.info(": entering bulk indexing")
    es.bulk(index=index_name, doc_type=doc_type, body=entries)
    logger.info("Indexing completed and no. of entries pushed in "
                "this flush are " + str(no_of_entries))
```

And I get the following error log when I run it:

```
/usr/bin/python2.7 /home/local/ANT/dandavat/git/power/dukeMetrics/duke_metrics_main.py
INFO:duke_metrics_logic:connecting to S3 to get data
INFO:duke_metrics_logic:Storing battery life to a list
INFO:duke_metrics_logic:getting duke build information
INFO:duke_metrics_logic:getting duke timestamps info..
INFO:duke_metrics_logic:Connection to elastic search successful
INFO:duke_metrics_logic:: entering bulk indexing
WARNING:elasticsearch:POST http://localhost:9200/my-index/metrics/_bulk [status:N/A request:10.101s]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 46, in perform_request
    response = self.pool.urlopen(method, url, body, retries=False, headers=headers, **kw)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 559, in urlopen
    _pool=self, _stacktrace=stacktrace)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/util/retry.py", line 223, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 336, in _make_request
    self, url, "Read timed out. (read timeout=%s)" % read_timeout)
ReadTimeoutError: HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=10)
WARNING:elasticsearch:Connection <Urllib3HttpConnection: http://localhost:9200> has failed for 1 times in a row, putting on 60 second timeout.
WARNING:elasticsearch:POST http://localhost:9200/my-index/metrics/_bulk [status:N/A request:10.098s]
[identical ReadTimeoutError traceback repeated]
WARNING:elasticsearch:Connection <Urllib3HttpConnection: http://localhost:9200> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST http://localhost:9200/my-index/metrics/_bulk [status:N/A request:10.098s]
[identical ReadTimeoutError traceback repeated]
WARNING:elasticsearch:Connection <Urllib3HttpConnection: http://localhost:9200> has failed for 3 times in a row, putting on 240 second timeout.
WARNING:elasticsearch:POST http://localhost:9200/my-index/metrics/_bulk [status:N/A request:10.104s]
[identical ReadTimeoutError traceback repeated]
WARNING:elasticsearch:Connection <Urllib3HttpConnection: http://localhost:9200> has failed for 4 times in a row, putting on 480 second timeout.
```
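The escalating lockout periods in the warnings above (60, 120, 240, 480 s) follow a doubling backoff the client applies to a node that keeps failing. A small sketch of that schedule, with the formula inferred from the logged values rather than taken from the library source:

```python
def dead_node_timeout(consecutive_failures, base=60):
    """Seconds a repeatedly failing node appears to be kept out of
    rotation: base * 2**(failures - 1), matching the 60/120/240/480
    progression in the log above."""
    return base * 2 ** (consecutive_failures - 1)

print([dead_node_timeout(n) for n in range(1, 5)])  # [60, 120, 240, 480]
```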
```
Traceback (most recent call last):
  File "/home/local/ANT/dandavat/git/power/dukeMetrics/duke_metrics_main.py", line 24, in <module>
    sys.exit(main())
  File "/home/local/ANT/dandavat/git/power/dukeMetrics/duke_metrics_main.py", line 20, in main
    dukemetricslogic.index_logs(index_name='my-index', doc_type='metrics')
  File "/home/local/ANT/dandavat/git/power/dukeMetrics/duke_metrics_logic.py", line 188, in index_logs
    es.bulk(index=index_name,doc_type=doc_type,body=entries)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 68, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 646, in bulk
    params=params, body=self._bulk_body(body))
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 276, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 51, in perform_request
    raise ConnectionError('N/A', str(e), e)
elasticsearch.exceptions.ConnectionError: ConnectionError(HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=10)) caused by: ReadTimeoutError(HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=10))

Process finished with exit code 1
```
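The failure the client ultimately reports is a client-side read timeout (`read timeout=10`): the server did not answer the bulk request within 10 seconds. One common mitigation, sketched here with illustrative values only (this is an assumption about a fix, not a confirmed diagnosis of this particular cluster):

```python
from elasticsearch import Elasticsearch

# Raise the client-side read timeout at construction time.
es = Elasticsearch(['http://localhost:9200'], timeout=60)

# elasticsearch-py also accepts a per-call override:
# es.bulk(index=index_name, doc_type=doc_type, body=entries, request_timeout=60)
```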

@clintongormley

Hi @karthikdandavathi

Please ask questions like these in the forum. This issues list is for bug reports and feature requests. When you mail the forum, I suggest you include the errors that you see in the Elasticsearch logs. The errors you include here are from the application, and are not terribly helpful.
