This time I used, or tried to use, convox proxy to load a few thousand lines of CSV with
clickhouse-client --query "INSERT INTO table FORMAT CSV" < /tmp/data.csv
and it took me a few hours to realize that it's the proxy causing the issue by jumbling packets, which explains the strange errors where a very simple CSV containing only UUIDs ends up as jumbled binary data.
It worked when I piped in 100 lines instead, but over some threshold it breaks; my guess would be some websocket buffer size. In this case I had no way of wrapping the data in base64.
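A chunking workaround can be sketched in shell, assuming the ~100-line threshold observed above holds. The sample data and the reassembly step are only stand-ins to show that the chunks cover the file exactly; in practice each chunk would be piped into clickhouse-client (shown commented out):

```shell
# Sketch: split the CSV into chunks below the observed threshold and
# insert each chunk separately. seq and the cat reassembly are stand-ins.
seq 1 1000 > /tmp/data.csv                 # stand-in for the real CSV
split -l 100 /tmp/data.csv /tmp/chunk.
: > /tmp/reassembled.csv
for f in /tmp/chunk.*; do
  # clickhouse-client --query "INSERT INTO table FORMAT CSV" < "$f"
  cat "$f" >> /tmp/reassembled.csv         # stand-in for the insert
done
cmp /tmp/data.csv /tmp/reassembled.csv     # chunks cover the file exactly
```

split names the pieces /tmp/chunk.aa, /tmp/chunk.ab, and so on, and the shell glob preserves that order, so rows are inserted in their original sequence.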
Issue Description
Since the cp command does not work, I'm trying to move the data using
convox exec -- pg_dump ... > /tmp/export.dump
and it seems to corrupt the data. I've verified that I can restore the dump on the remote machine, but it will not work locally: pg_restore says it reached end of file.
I guess there's some limit being reached in the websocket?
/ # convox exec f9972d13c5bd -- md5sum /tmp/export.dump
dd7984707e673a944dbd777f679c09a1 /tmp/export.dump
/ # ls -la /tmp/export.dump
-rw-r--r-- 1 root root 2101019 Jul 8 06:36 /tmp/export.dump
$ convox exec f9972d13c5bd -- cat /tmp/export.dump > /tmp/export-2.dump
$ md5sum /tmp/export-2.dump
e85322c7779327d943f9045ef9695fe2 /tmp/export-2.dump
$ ls -la /tmp/export-2.dump
-rw-r--r-- 1 robert wheel 2102198 Jul 8 08:49 /tmp/export-2.dump
Actually, it seems to be some kind of encoding issue. Possibly it's trying to convert the stream to UTF-8? Notably, the local copy is 1179 bytes larger (2102198 vs 2101019), which would fit invalid byte sequences being expanded into multi-byte replacements.
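That expansion effect is easy to reproduce locally. A minimal sketch (this only illustrates the mechanism, it does not prove it is what convox does): a single byte that is invalid as UTF-8 doubles in size when transcoded:

```shell
# One byte (0xFF) that is not valid UTF-8 on its own; transcoding it
# from latin1 to UTF-8 expands it to the two-byte sequence 0xC3 0xBF.
printf '\xff' > /tmp/raw.bin
iconv -f latin1 -t utf-8 /tmp/raw.bin > /tmp/converted.bin
wc -c /tmp/raw.bin /tmp/converted.bin    # 1 byte in, 2 bytes out
```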
$ convox exec f9972d13c5bd -- base64 /tmp/export.dump > /tmp/export-2.b64.dump
$ base64 -d /tmp/export-2.b64.dump > /tmp/export-3.dump
$ md5sum /tmp/export-3.dump
dd7984707e673a944dbd777f679c09a1 /tmp/export-3.dump
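The base64 detour works because only ASCII then crosses the websocket, and the encoding itself is byte-exact. A quick local sanity check of that round trip:

```shell
# Round-trip 100 kB of random binary data through base64; the checksums
# match because encode/decode is lossless.
head -c 100000 /dev/urandom > /tmp/blob.bin
base64 /tmp/blob.bin | base64 -d > /tmp/blob-roundtrip.bin
md5sum /tmp/blob.bin /tmp/blob-roundtrip.bin
```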
I guess the question you refer to is "If you wrap your dump in a tar or something, does that work for you?" In that case, the dump file in the example is the output of
pg_dump -Fc
which I believe is already a tar, so I don't see how tarring would help. And wrapping in base64 worked, as mentioned.

Today I've been running
convox proxy
to access some APIs we have internally. The API in question is an HTTP API where I POST JSON like
{"csv": "<50kbish csv here...>"}
The CSV is then processed, and it seems that a few rows were dropped because they are invalid, or worse, not dropped and inserted as invalid into our database. Without the proxy I cannot reproduce it, so I suspect this packet dropping is related.
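One way to confirm the suspicion would be to checksum the exact request body before it enters the proxy and compare it with what the API received. A sketch, assuming jq is available (the URL, port, endpoint, and field name are placeholders, and the curl call is shown commented out):

```shell
# Build the {"csv": ...} body from the CSV, record its checksum, and POST
# it through the proxy; the server side can then hash what it received.
seq 1 2000 > /tmp/data.csv                 # stand-in for the ~50 kB CSV
jq -Rs '{csv: .}' /tmp/data.csv > /tmp/body.json
md5sum /tmp/body.json                      # compare with the server's view
# curl -s -X POST -H 'Content-Type: application/json' \
#      --data-binary @/tmp/body.json http://127.0.0.1:3000/import
jq -j '.csv' /tmp/body.json > /tmp/roundtrip.csv
cmp /tmp/data.csv /tmp/roundtrip.csv       # the JSON wrapping is lossless
```

If the checksum of the body the server received differs from the local one, the corruption happened in transit rather than in the CSV processing.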