
Occasional lockups dumping planet #24

Open
tomhughes opened this issue Sep 26, 2021 · 12 comments
@tomhughes (Contributor) commented Sep 26, 2021

We've had a couple of instances - one in July (openstreetmap/operations#552) and one this week (openstreetmap/operations#568) - where planet-dump-ng has experienced some sort of thread deadlock and stopped making progress.

I grabbed a full backtrace of all threads this time (https://gist.github.com/tomhughes/250e1504f4689fc31a0ca4e0ab4e029e), and from analysing the output it looks like it had completed the nodes and ways and was in the middle of the relations when it stopped.

@tomhughes (Contributor, Author)

We had one good run, but it looks like last week's run locked up again :-(

@zerebubuth (Owner)

Weird that this is happening so often now. 🤔 I wonder if something changed?

Thanks very much for the thread backtraces; they were very helpful. It looks like one of the output writer threads died at some point while outputting the relations, and this isn't handled properly.

It looks like the incron script that runs planet-dump-ng should send me any output, but the last time it sent me anything was July 2020... So either the output isn't being written to disk, or the script isn't mailing it. If it happens again, please check whether there's a /tmp/planetdump.log.XXX file. If it's non-empty, it might give a better clue as to what's happening.
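
For context, a minimal sketch of that failure mode in illustrative C++ (assuming a fixed-count barrier between the reader and the writer threads; this is not planet-dump-ng's actual code): if one participant dies without reaching the barrier, every other thread blocks in wait() forever, which looks exactly like a silent lockup.

    // Illustrative only, not planet-dump-ng's actual code: one "reader" and
    // three "writer" threads meet at a fixed-count barrier for every block of
    // data that gets pumped. If a writer throws and leaves its loop, the
    // barrier still expects four arrivals, so every other thread blocks in
    // wait() forever -- a silent hang rather than a crash.
    #include <boost/thread/barrier.hpp>
    #include <iostream>
    #include <stdexcept>
    #include <thread>
    #include <vector>

    int main() {
      boost::barrier sync(4);   // reader + 3 writers

      std::vector<std::thread> writers;
      for (int id = 1; id <= 3; ++id) {
        writers.emplace_back([&sync, id]() {
          try {
            for (int block = 0; block < 100; ++block) {
              if (id == 1 && block == 5)
                throw std::runtime_error("simulated writer failure");
              sync.wait();      // hand-off point with the reader
            }
          } catch (const std::exception &e) {
            std::cerr << "writer " << id << " died: " << e.what() << "\n";
            // This thread exits without ever reaching sync.wait() again.
          }
        });
      }

      for (int block = 0; block < 100; ++block) {
        sync.wait();            // reader: hangs for good once writer 1 dies
      }
      for (auto &w : writers) w.join();
    }

Run, this prints the failed writer's message to stderr and then makes no further progress, which is the same symptom the backtraces show.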

@tomhughes (Contributor, Author)

Unfortunately the wrapper script deletes the log after it has mailed it, so it's no longer there.

I assume it must have been empty though, or it would have been emailed to you as you say, and I see no sign of that.

zerebubuth added a commit that referenced this issue Oct 11, 2021
There have been some lock-ups recently running planet-dump-ng in production (#24). Thanks to thread backtraces, it seems that a writer thread was dying (although there was no output?) and therefore no longer participating in the barrier to pump data from the reader, so the whole program was locking up.

The new behaviour is for the dying thread to still participate in pumping messages, but without the calls to the output writer. If the reader thread encounters an exception, it will abort. Hopefully this means that if a single writer dies, we get all the other output, and if the reader dies then we get a crash instead of a hang.
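
A rough sketch of that pattern in illustrative C++ (hypothetical names, not the actual planet-dump-ng code): once a writer has failed it keeps arriving at the barrier so the other threads can make progress, but it skips its output calls.

    // Sketch of the recovery pattern described in the commit message above,
    // not the real code: after a failure the writer keeps "pumping" (keeps
    // hitting the barrier) but no longer calls the output writer, so nobody
    // is left waiting on a missing participant.
    #include <boost/thread/barrier.hpp>
    #include <exception>
    #include <iostream>

    // Hypothetical writer interface, for illustration only.
    struct output_writer {
      virtual void write_block(int block) = 0;
      virtual ~output_writer() {}
    };

    void writer_thread(output_writer &out, boost::barrier &sync, int num_blocks) {
      bool failed = false;
      for (int block = 0; block < num_blocks; ++block) {
        if (!failed) {
          try {
            out.write_block(block);
          } catch (const std::exception &e) {
            std::cerr << "EXCEPTION: " << e.what() << ". Trying to continue...\n";
            failed = true;  // stop writing, but keep taking part below
          }
        }
        sync.wait();        // always reached, so the reader never deadlocks
      }
    }

If the reader itself throws there is nothing left to pump, so aborting (a crash rather than a hang) is what to expect in that case.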

@zerebubuth (Owner)

Yeah, I thought the same, but the log file should never be empty. Until July 2020, I was getting weekly confirmations of the form:

Writing changesets...
Writing nodes...
Writing ways...
Writing relations...
Done

real 3530m46.156s
user 31670m53.427s
sys 972m26.327s

But then I got a few errors around the 14th-16th of July, and nothing since. (Clearly I wasn't doing an awesome job of noticing these emails, or I'd have realised they'd stopped coming before now.)

I think I made a change that could help (assuming this is how the thread exits...). Please could you try version 1.2.2 and see if that helps? https://github.com/zerebubuth/planet-dump-ng/releases/tag/v1.2.2

@tomhughes (Contributor, Author)

I've deployed that, and I think I've figured out the email problem and fixed it. I think you've broken something, though, because I get a stream of errors now if I try to start the dump:

pg_restore: error: one of -d/--dbname and -f/--file must be specified

I assume this is something to do with e2d9c70?

@zerebubuth (Owner)

Oops, fail. Looks like the machine I was testing on had a truly ancient version of PostgreSQL (9!). I reverted that commit and pushed a new version, v1.2.3.

https://github.com/zerebubuth/planet-dump-ng/releases/tag/v1.2.3

@mmd-osm commented Oct 11, 2021

I ran a quick test to validate that a simulated runtime_error in pbf_writer::relations is handled OK now, by throwing on id 10000, the very first relation id in the dump:

    // simulate a writer failure on the very first relation
    if (r.id == 10000) { BOOST_THROW_EXCEPTION(std::runtime_error("bla")); }

Result on 3e48263:

Writing changesets...
Writing nodes...
Writing ways...
Writing relations...
EXCEPTION: writer_thread(1): pbf_writer.cpp(613): Throw in function virtual void pbf_writer::relations(const std::vector<relation>&, const std::vector<relation_member>&, const std::vector<old_tag>&)
Dynamic exception type: boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::runtime_error> >
std::exception::what: bla

<stuck>

Result on b190303:

Finishing thread with 24 bytes
Writing changesets...
Writing nodes...
Writing ways...
Writing relations...
EXCEPTION: writer_thread(1): pbf_writer.cpp(613): Throw in function virtual void pbf_writer::relations(const std::vector<relation>&, const std::vector<relation_member>&, const std::vector<old_tag>&)
Dynamic exception type: boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::runtime_error> >
std::exception::what: bla
. Trying to continue...
Done

echo $? returns 0 even when there was an issue. This may not be ideal for monitoring...
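
A minimal sketch (illustrative only; not what the program currently does, and the names are made up) of how a writer failure could be folded into the exit status so that checking $? becomes useful for monitoring:

    // Sketch only: remember that a writer died and report it via the exit
    // code, so a wrapper that checks $? can tell a clean run from a degraded
    // one even though "Done" was printed and the other files were produced.
    #include <atomic>
    #include <cstdlib>

    std::atomic<bool> any_writer_failed(false);

    // ...inside the writer's catch block, in addition to continuing:
    //     any_writer_failed.store(true);

    int exit_status() {
      return any_writer_failed.load() ? EXIT_FAILURE : EXIT_SUCCESS;
    }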

@mmd-osm commented Oct 12, 2021

I was wondering whether the issue is reproducible, i.e. whether processing the same db dump twice would show exactly the same behavior as the previous run.

Given that this happened in relations (which tend to be much larger than ways and nodes), I could imagine that a certain combination of large relations might trigger some rare bug in the pbf writer code, e.g. due to lack of space in a pbf block.

@mnalis commented Sep 20, 2022

So, do the mails with errors arrive now at least?

It seems the planet dump has failed again? Today is 2022-09-20, and the last one is planet-220905.osm.pbf (created on 2022-09-10).

@tomhughes (Contributor, Author)

Yes it failed, yes I got email, yes our alerts went off, yes we are continuing to investigate.

@tomhughes (Contributor, Author)

As you can see if you look at #25, which is the actual ticket where we're currently dealing with this...

@tomhughes (Contributor, Author)

Which is actually a totally separate issue - it's a crash rather than a lockup.
