
Odd issue when going through tutorial - copy csv fails to copy to table [solved] #742

Closed
webventurer opened this issue Sep 20, 2018 · 2 comments

webventurer commented Sep 20, 2018

Setup: Vultr Cloud Compute (VC2) server, Ubuntu 18.04.1 LTS, Postgres 10.5, timescaledb extension 0.12.1

Hi guys, I was running through the tutorial here:

https://blog.timescale.com/analyzing-ethereum-bitcoin-and-1200-cryptocurrencies-using-postgresql-downloading-the-dataset-a1bbc2d4d992

Ran this command (as per tutorial):
crypto@oracle:~/tmp/crypto_data$ PGPASSWORD=hidden psql -U crypto -d crypto_data_test -h localhost -p 5433 -c "\COPY crypto_prices FROM crypto_prices.csv CSV"
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

Received these errors in /var/log/postgresql/postgres-10-main.log:
2018-09-20 15:22:31.757 UTC [3299] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2018-09-20 15:22:31.757 UTC [3299] HINT: In a moment you should be able to reconnect to the database and repeat your command.
2018-09-20 15:22:31.765 UTC [1902] LOG: all server processes terminated; reinitializing
2018-09-20 15:22:31.790 UTC [3453] LOG: database system was interrupted; last known up at 2018-09-20 15:16:05 UTC
2018-09-20 15:22:32.584 UTC [3453] LOG: database system was not properly shut down; automatic recovery in progress
2018-09-20 15:22:32.590 UTC [3453] LOG: redo starts at 0/1F400050
2018-09-20 15:22:37.965 UTC [3453] LOG: redo done at 0/337FFF58
2018-09-20 15:22:37.966 UTC [3453] LOG: last completed transaction was at log time 2018-09-20 15:22:10.774633+00
2018-09-20 15:22:38.344 UTC [1902] LOG: database system is ready to accept connections
2018-09-20 15:22:38.344 UTC [3459] LOG: TimescaleDB background worker launcher connected to shared catalogs

Note: every time I run it I get a different error:
ERROR: index row requires 534432 bytes, maximum size is 8191
CONTEXT: COPY crypto_prices, line 402850: "11/14/2015 19:00,1.10E-06,2.20E-06,1.10E-06,2.20E-06,35751.04,0.0777,SLM"

And on another run, a different (but related) error:
ERROR: index row requires 534432 bytes, maximum size is 8191
CONTEXT: COPY crypto_prices, line 520255: "3/16/2015 20:00,4.51E-05,5.00E-05,4.41E-05,4.42E-05,0,0,TAG"

Any ideas what happened?
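For what it's worth, the rows quoted in those errors are only about 70 bytes long, so the "index row requires 534432 bytes" figure cannot be coming from the data itself. A quick sanity check along these lines (a sketch, assuming the CSV is in the current directory) would confirm no row comes anywhere near the 8191-byte limit, pointing at shared-memory corruption rather than bad data:

```shell
# Hypothetical sanity check: print any CSV row longer than 8191 bytes.
# If this prints nothing, the byte count in the error is bogus and the
# problem is server-side (memory/configuration), not in the file.
awk 'length($0) > 8191 { print NR ": " length($0) " bytes" }' crypto_prices.csv
```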

@webventurer
Author

And another one:
2018-09-20 16:32:17.573 UTC [4227] crypto@crypto_data_test ERROR: index row requires 534432 bytes, maximum size is 8191
2018-09-20 16:32:17.573 UTC [4227] crypto@crypto_data_test CONTEXT: COPY crypto_prices, line 518124: "3/22/2015 20:00,2.20E-05,2.40E-05,2.20E-05,2.39E-05,12763.37,0.2812,MAX"
2018-09-20 16:32:17.573 UTC [4227] crypto@crypto_data_test STATEMENT: COPY crypto_prices FROM STDIN CSV
2018-09-20 16:32:24.053 UTC [4231] WARNING: relation "pg_attribute" page 465 is uninitialized --- fixing

I seem to get a different error each time I run it.

@webventurer
Author

Issue resolved

I ran pgtune (https://pgtune.leopard.in.ua/#/), which generated these parameters for my instance:

DB Version: 10
OS Type: linux
DB Type: dw
Total Memory (RAM): 32 GB
CPUs num: 8
Connections num: 10
Data Storage: ssd

max_connections = 10
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 500
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 104857kB
min_wal_size = 4GB
max_wal_size = 8GB
max_worker_processes = 8
max_parallel_workers_per_gather = 4
max_parallel_workers = 8

Problem went away, so it was a configuration issue. Probably worth updating the blog posts to cover this:
https://blog.timescale.com/analyzing-ethereum-bitcoin-and-1200-cryptocurrencies-using-postgresql-downloading-the-dataset-a1bbc2d4d992
https://blog.timescale.com/analyzing-ethereum-bitcoin-and-1200-cryptocurrencies-using-postgresql-3958b3662e51
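For anyone following along, those settings can be applied without hand-editing postgresql.conf via ALTER SYSTEM, which writes them to postgresql.auto.conf. This is a sketch; the port and service name are assumptions taken from the setup described in this thread, and shared_buffers (among others) only takes effect after a full restart:

```shell
# Sketch: apply the pgtune output via ALTER SYSTEM, then restart.
# Port 5433 and the "postgresql" service name are assumptions.
sudo -u postgres psql -p 5433 <<'SQL'
ALTER SYSTEM SET max_connections = 10;
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET effective_cache_size = '24GB';
ALTER SYSTEM SET maintenance_work_mem = '2GB';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
ALTER SYSTEM SET wal_buffers = '16MB';
ALTER SYSTEM SET default_statistics_target = 500;
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_io_concurrency = 200;
ALTER SYSTEM SET work_mem = '104857kB';
ALTER SYSTEM SET min_wal_size = '4GB';
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET max_worker_processes = 8;
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
ALTER SYSTEM SET max_parallel_workers = 8;
SQL
sudo systemctl restart postgresql
```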

@webventurer webventurer changed the title Odd issue when going through tutorial - copy csv fails to copy to table Odd issue when going through tutorial - copy csv fails to copy to table [solved] Sep 20, 2018