
Destroyed app leaks config, logs, etc. into a re-created app with the same name #4761

Closed
davidcelis opened this issue Nov 20, 2015 · 3 comments

Comments

@davidcelis

I earlier created a Hubot app on my deis cluster (v1.12.1) named b9. I started randomly getting 404s for every command I tried to run against it (deis apps:logs, deis ps, etc.), so I decided to try destroying it and re-creating it. For some reason, I had to run deisctl ssh router and fleetctl destroy its Procfile process before I could attempt to destroy it. The app wasn't listed in deis apps:list despite still having an entry in the PostgreSQL database's api_app table.
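In case it helps anyone doing the same digging: I confirmed the leftover row by connecting to the database container and querying that table directly. The container, database, and role names below are just the defaults on my cluster, so adjust as needed:

% deisctl ssh database
% docker exec -it deis-database psql -U deis -d deis
deis=# SELECT * FROM api_app;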

After destroying the application, I re-created it and set about reconfiguring it. After attempting to set the third configuration value, I noticed something odd:

% deis config:set HUBOT_SLACK_TOKEN=...
Creating config... done, v10

=== b9 Config
...

That v10 appeared despite my not having run any sort of git push yet, and only two config options had been set before it. Additionally, HUBOT_SLACK_TOKEN didn't actually end up getting set; I tried setting it again and that time it worked as expected. When I finally ran git push deis master, it informed me that it had launched v5 of my app. Running deis apps:logs then, sure enough, showed log entries from my old application.
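For anyone else who ends up in this state, the leftover history should also be visible from the standard CLI (I'm assuming the usual releases/config commands here; b9 is just my app name):

% deis releases:list -a b9
% deis config:list -a b9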

My app appears to be totally hosed and I have no idea how to recover. Destroying an app seems to leave unwanted pieces of state behind, and the re-created app then behaves strangely.

@krancour
Contributor

krancour commented Dec 1, 2015

I started randomly getting 404s for every command I tried to run against it (deis apps:logs, deis ps, etc.)

If this was happening, you can't trust anything you did after that point. So the proximate cause of that symptom is what we need to get to the bottom of.

I've seen this happen just once before, and it was on another guy's laptop at a conference. We parted ways before getting to the bottom of it, but my theory is that the database may have been flapping. If it was, that would explain the 404s at the start of your narrative as well as the surprising presence of old configuration towards the end of it. Whether you're talking about adding a new record or deleting an old one, if the database isn't up long enough for recent changes to be backed up, then when it restarts, the state in the database won't match reality.

This is just a theory, but if you have any logs from the database from around the time this occurred, they may be enlightening. I have a strong suspicion that it was experiencing some problems that led directly to what you reported.
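If the cluster (or any of its journals) is still around, something along these lines should pull the relevant output; I'm assuming the default unit names here:

% deisctl status database
% deisctl journal database

Running journalctl -u deis-database.service directly on the host that schedules the unit should surface the same logs.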

@davidcelis
Author

Unfortunately, time was running out for me and I had to rebuild my cluster to get my apps back up and running. I couldn't afford to keep paying for the old servers, and I only realized later that I hadn't saved any logs. So far the new cluster has been running just fine, but I'll report back if anything like this happens again. Sorry!

@bacongobbler
Member

This is not something we'll be able to get to for the LTS release (#4776). If you happen to reproduce this, please let us know. Thanks!
