Importing Postgres backup to Heroku from AWS S3 #120
One reason may lie in how I'm invoking the commands. In my shell, I presign the AWS URL like so:
I then invoke heroku’s restore command (using the signed URL generated above) :
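The command blocks themselves did not survive in this copy of the thread; based on the versions quoted later in the discussion, the two invocations were along these lines (bucket name, dump key, database color, and app name are all taken from this thread):

```shell
# Generate a presigned URL for the uploaded dump (valid for 3600 s by default)
aws s3 presign s3://postgres-restore-tarot-juicer/2021June25_8801def6-27e0-4b88-875b-842be5704f0b

# Feed the signed URL into Heroku's restore command
heroku pg:backups:restore '<SIGNED URL>' HEROKU_POSTGRESQL_PUCE_URL --app tarot-prod
```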
So it failed. The tail of that traceback suggests more information is available by using the prescribed command. Here it is:
That error seems to point to a (potential) misuse on my part of one of the commands above.
@enoren5 Brother, checking:
```shell
aws s3 presign s3://postgres-restore-tarot-juicer/2021June25_8801def6-27e0-4b88-875b-842be5704f0b

# After executing the above command you will get a signed URL (<SIGNED URL>),
# which you can then pass to the Heroku command:
heroku pg:backups:restore '<SIGNED URL>' HEROKU_POSTGRESQL_PUCE_URL --app tarot-prod

# The docs mention only the app name in the command; run it with that first,
# and if an error appears, try the remote name, e.g. `tarot_prod`, not `heroku`.
```
I believe the error from the info command is not related to the root cause of the restore failure; it appears because the info command does not know which Postgres instance to run against. Hopefully it will give you some details once it runs successfully. Regarding the presigned URL, after generating it, do verify that you can visit the URL in your browser and access the content without any error.
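As a quick way to run that browser check from the terminal instead, curl can report just the status code (curl is my suggestion here, not something from the thread; `<SIGNED URL>` is the placeholder used throughout):

```shell
# Print only the HTTP status of the presigned URL:
# 200 means the object is reachable; 403 usually means AccessDenied
# or an expired/invalid signature.
curl -s -o /dev/null -w '%{http_code}\n' '<SIGNED URL>'
```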
@UmarGit: I tried this command exactly as I wrote above. I presigned an S3 URL. To quote myself above and to reiterate, here is the output of that command:
Next you suggested another command:
I already tried this (as I documented in my comment from yesterday). To quote myself again:
Take note of the switches at the end. You may need to scroll to the right. To quote myself again, here is the output of this command:
However I have some new progress. Here is the output from @alienware's recommended command:
Based on this output, it looks like there is still a permissions issue.
When I navigate to the presigned https:// web address (my S3 bucket), here is what it says:

```xml
<Error>
  <Code>AccessDenied</Code>
  <Message>Request has expired</Message>
  <X-Amz-Expires>3600</X-Amz-Expires>
  <Expires>2021-07-07T01:06:59Z</Expires>
  <ServerTime>2021-07-07T10:53:35Z</ServerTime>
  <RequestId>3WD5533RANDGPJK8</RequestId>
  <HostId>AD2cM2WTE171ZEQdPr7ELlZkeesO/XfSaT5pz6iGIFjRkvX++31cRjyl6wWL/skukRuXY3gvotU=</HostId>
</Error>
```
The link expires after 3600 s (60 minutes), so you'll need to generate a fresh presigned URL and try the restore again.
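The `<Expires>` and `<ServerTime>` fields in the XML error quoted earlier make the diagnosis concrete. A small sketch of the arithmetic, plus how a longer signing window can be requested (`--expires-in` is a real `aws s3 presign` option; SigV4 caps it at 7 days):

```shell
# Compare the timestamps from the AccessDenied error
expires=$(date -ud '2021-07-07T01:06:59Z' +%s)
server_time=$(date -ud '2021-07-07T10:53:35Z' +%s)
echo "URL expired $(( (server_time - expires) / 60 )) minutes before the request"  # 586 minutes

# A longer window can be requested when signing (max 604800 s = 7 days):
#   aws s3 presign s3://postgres-restore-tarot-juicer/2021June25_8801def6-27e0-4b88-875b-842be5704f0b --expires-in 604800
```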
Thank you, @alienware! I presigned a fresh URI. The restore still fails. However, based on your previous advice, I tried opening the URI in my web browser, which is turning up a new XML error message. Here it is in full:
So the problem now is pointing to the region/location, as @alienware already noted a few days ago. I looked into this, and on Google I found the AWS CLI User Guide's page on command line options. I figure I need to change the region setting.
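A sketch of how that region change could be made with the AWS CLI (the actual bucket region isn't stated in the thread, so `us-east-2` below is purely a placeholder):

```shell
# Inspect the CLI's current default region
aws configure get region

# Point the CLI at the bucket's region (placeholder region shown)
aws configure set region us-east-2

# Or override per-command via the global --region option
aws s3 presign s3://postgres-restore-tarot-juicer/2021June25_8801def6-27e0-4b88-875b-842be5704f0b --region us-east-2
```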
When I go to any of those three links, I am getting a new XML error:
I'm stabbing in the dark here now. Any ideas for what I could try next?
That's good news; all progress comes with new problems and learning. 😄 This seems like an AWS account secrets configuration issue on your machine/terminal, so you should make sure your credentials are correct. You don't hit the incorrect-credentials problem while generating the presigned URL, because you generate such a link locally, to be used at a later time; the access validation happens only when you open the link and hit the S3 service. The most likely resolution, in my view, is to create a new IAM programmatic user and reconfigure awscli with its credentials. Best wishes! 👍
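Reconfiguring awscli after creating the new IAM programmatic user is a single interactive command; the prompt values below are placeholders, not real credentials:

```shell
aws configure
# AWS Access Key ID [None]: AKIAxxxxxxxxxxxxxxxx
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-2
# Default output format [None]: json
```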
You were right, @alienware, the issue was with my AWS configuration. There were two AWS_ACCESS_KEY_IDs in my AWS console; I just made the right one active. For good measure, I exported these variables in my shell like so:
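The export itself was stripped from this copy of the thread; a sketch of the standard form, with placeholder values rather than the real keys:

```shell
# Make the active key pair visible to awscli and any child processes;
# these environment variables override ~/.aws/credentials.
export AWS_ACCESS_KEY_ID="AKIAxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```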
Eureka! That worked! I successfully uploaded and restored my Postgres binary on Heroku. It's also worth pointing out that, to locate in my AWS Console the unique AWS configuration variables I needed to export in my shell, I leveraged the answers to this SO question. Thanks for your guidance, @alienware, and for yours as well, @UmarGit!
Happy to be of some help, @enoren5. I got to know of your repository and requirements through the bounty project that you had posted on Upwork. If your offer is still open, I would love to take a stab at the whole $85 bounty by helping to resolve the rest of the issues after getting familiar with your work.
I thank you for your support so far, @alienware, but I have already hired someone. In my job description I did clearly state this (verbatim):
Most Upwork freelance job postings for web development created by other contractors don't usually include links to GitHub repos. You saw my repo and began work before you and I agreed to work together, so your comments are 'technically' volunteer work. I'm not trying to be mean or an asshole, but I have already funded my Upwork contract with a freelancer who is currently working on my other requirements. I can't change that now, and I'm not going to pay two people $85 to complete the same requirements. It's way too late. I apologize. Next time I make a job posting, I will try to make it even clearer that GitHub issue comments and PRs will not be compensated until we agree to partner together. Or perhaps, to avoid this situation, I should just not post a link to my GitHub repo in the job description at all. =/
Thank you @enoren5 for detailing the situation to me and please, there is no need for you to worry or apologize. I was aware of your note regarding when to begin work; I felt the requirements were much easier to approach due to you posting your repo and I would say that it is a much welcome approach to portray requirements. I absolutely understand if you have already concluded your freelancer search. There is no expectation of any payment from my side; I only wanted to apply for the job in case the offer was still open. No worries, I'll apply the next time. 😀 |
I’m learning how to handle Postgres instances by backing them up and restoring them on Heroku for a Django project (a small, rudimentary CMS). The amount of data is a few hundred kilobytes, because it's just text that I am storing in my db. I'm practicing backups and restores just to learn, for fun.
I realize this is loosely related to Python/Django, but it does fall into the general category of development / programming. I hope my post is welcome here.
I downloaded the binary data to my local machine using this particular section of the Heroku doc.
The next step was to create an AWS account, including setting up Access Keys which I located in the dashboard and entered them into my local dev environment. I named my bucket. I uploaded the binary to S3.
I’ve made it all the way to the end of Heroku’s import Postgres guide.
I installed the awscli package with pip, which enabled me to presign my S3 bucket URL (which succeeded). I am right at the final step of importing my backup to Heroku Postgres. I am so close!
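For reference, that install step is just the following (a sketch; whether pip targets a virtualenv or the system interpreter is up to you):

```shell
pip install awscli
aws --version   # verify the `aws` entry point is on PATH
```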
My traceback at this point indicates that Heroku expects an HTTP 200 (OK) but instead receives an HTTP 400 (Bad Request) ‘due to the source URL being inaccessible’. This points towards the restrictive permissions in place on my AWS S3 bucket.
You can find my traceback in full at the bottom of this issue.
With regards to my AWS S3 bucket: in the dashboard, the main Permissions switch relevant here is the “Block all public access” option. Whether this checkbox is enabled or disabled (I carefully tried both), I encountered the same HTTP 400 in my traceback. This is where I believe the issue is.
I’m not sure what else to try. I’m also a little concerned that, with the vast number of variables available for Amazon’s S3 service, I don’t know how I might share or export my configuration nicely for you to take a closer look. What other information could I provide to better help you help me?
Here is the restore command I am using:
Right at the end there, it recommends using this info command:
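The command blocks did not survive in this copy of the thread; from the versions quoted in the comments, the restore invocation and the recommended follow-up look like this:

```shell
# The restore command (signed URL generated with `aws s3 presign`)
heroku pg:backups:restore '<SIGNED URL>' HEROKU_POSTGRESQL_PUCE_URL --app tarot-prod

# The follow-up command the Heroku CLI recommends for details on the failed run
heroku pg:backups:info --app tarot-prod
```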