
ERROR: invalid memory alloc request size 1073741824 #46

Closed
rugging24 opened this issue Dec 15, 2017 · 7 comments

@rugging24

Hello,
I recently ran into an interesting situation with the plugin.

- Scenario:
I made a DDL change on a relatively large table, involving adding a new column with a default value and a type, all in a single transaction. The table has a trigger on it, but it was disabled during the operation.

--
What I found was that the wal2json plugin, while decoding that part of the WAL, output a lot of warning messages:

WARNING: column "idl_old_id_column_nanme" has an unchanged TOAST :

and eventually errored out with the following message:

ERROR: invalid memory alloc request size 1073741824

--
Beyond this point, no decoding was possible, even after I manually provided LSNs and purged all the data just before the complaining LSN value.

The error message shows that it is trying to use more memory than could be provided, and some digging around pointed to gradually reducing work_mem to prevent this; that didn't help either.

-- My questions

  1. Since this can lead to serious problems (data loss is imminent when no decoding can be done),
    what is the way to avoid it in the future?
  2. What is the best way to fix decoding after it abruptly stops following a huge transaction and will not continue even after a restart?
  3. Is it necessary for all tables to be decoded to have at least a minimal replica identity, even when the decoded data are simply discarded because they are not needed?
  4. Is there a plan to eventually implement table/schema filtering while decoding the WAL?

Cheers.
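Regarding question 4: later wal2json versions did gain table filtering via the `add-tables` and `filter-tables` plugin options, which can be passed through `pg_recvlogical`. A minimal sketch, assuming a database `mydb` and slot name `test_slot` (both hypothetical), and a wal2json build recent enough to support these options:

```shell
# Create a logical replication slot using the wal2json output plugin.
pg_recvlogical -d mydb --slot test_slot --create-slot -P wal2json

# Stream changes, decoding only the listed tables (schema-qualified,
# comma-separated); everything else in the WAL is skipped by the plugin.
pg_recvlogical -d mydb --slot test_slot --start \
  -o add-tables='public.orders,public.customers' \
  -f -
```

Filtering happens at decode time on the server, so it avoids shipping unwanted tables to the client, but the WAL for skipped tables is still read and processed by the decoder.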

@eulerto
Owner

eulerto commented Dec 15, 2017

@rugging24 what is the exact postgres version? I bet you are using a version < 9.4.8 or < 9.5.3. If so, update the binaries to the latest minor version (9.4.15, 9.5.10) and try again. There are a lot of nasty bugs around logical decoding in those releases.

@rugging24
Author

Hi,
Thanks for the response. My PG version is:

PostgreSQL 9.5.8 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit

@AGanguli

We're getting the memory error as well. I think it happens when there have been a lot of updates while nobody is listening on the replication slot: all the changes then get sent at once, and I guess there's a 1 GB limit.

There should be a maximum message size, after which the updates are split into multiple messages.
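If the output is split into multiple messages, the consumer has to reassemble them before parsing. A minimal consumer-side sketch (not wal2json's actual protocol, just the general accumulate-until-parseable technique), assuming each fragment arrives as a separate string:

```python
import json

def reassemble(chunks):
    """Accumulate JSON fragments until the buffer parses as a complete
    document, then yield the document and reset the buffer."""
    buf = ""
    for piece in chunks:
        buf += piece
        try:
            doc = json.loads(buf)
        except json.JSONDecodeError:
            continue  # incomplete document, keep accumulating
        yield doc
        buf = ""

# Hypothetical chunk stream: one transaction split into three fragments.
parts = ['{"change": [', '{"kind": "insert", "table": "t1"}', ']}']
docs = list(reassemble(parts))
```

This trades memory for simplicity: the buffer still grows to the size of one full document, but no single network message has to carry the whole transaction.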

@eulerto
Owner

eulerto commented Mar 15, 2018

@AGanguli Do you have a test case for me? What is your exact PostgreSQL version?

@AGanguli

I can try to construct one.

This is 9.6.6. Unfortunately it's Amazon's RDS service - no idea what sort of modifications they've made.

@rcoup
Contributor

rcoup commented Mar 15, 2018

From AWS (16 Nov):

Currently we are using 645ab69 as the latest supported wal2json commit.

@eulerto
Owner

eulerto commented Mar 27, 2018

Use the write-in-chunks parameter. Unfortunately, this is a limitation of PostgreSQL's allocation infrastructure.
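For reference, `write-in-chunks` is a wal2json plugin option, so it is passed via `-o` when streaming. A sketch with hypothetical database and slot names:

```shell
# Stream changes with write-in-chunks enabled: wal2json writes the JSON
# output in pieces as it decodes, instead of building one huge in-memory
# string per transaction (which is what hits the ~1 GB palloc limit).
pg_recvlogical -d mydb --slot my_slot --start \
  -o write-in-chunks=1 \
  -f -
```

Note that with chunked output each message may no longer be a complete JSON document on its own, so the consumer must buffer and reassemble before parsing.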
