
Remove "development" stage #563

Closed
tj opened this Issue Jan 31, 2018 · 5 comments

@tj
Member

commented Jan 31, 2018

Right now the "development" stage is a little confusing for a few reasons:

I wouldn't really call anything running live "dev"; dev == local to me. It used to utilize $LATEST in Lambda, so it made a bit more sense at the time.

Also, you can now do the following to override up start's local command; however, this is also what will run in Lambda, which is confusing:

{
  "name": "app",
  "stages": {
    "development": {
      "proxy": {
        "command": "gin --port $PORT"
      }
    }
  }
}

Also, we just really don't need two staging envs haha... later, if we have custom stages, you could do whatever you like, of course.

Changes

  • Remove "development" and treat up as "staging"
  • If you have a development stage mapped to a domain already you will have to change it to staging
  • Vars set via up env and mapped to "development" will be used for up start only
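For illustration, a migrated up.json would map the domain under "staging" instead of "development"; the domain name here is hypothetical:

```json
{
  "name": "app",
  "stages": {
    "staging": {
      "domain": "staging.example.com"
    }
  }
}
```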

Work

  • Default to "staging"
  • Remove mention of "development" aside from local use
  • QA up start env vars
  • Update examples repo
  • Update old blog posts


@tj tj closed this in 272a2a6 Feb 1, 2018

kaihendry added a commit to kaihendry/prazespeed that referenced this issue Feb 19, 2018

@kaihendry


commented Feb 19, 2018

Argh, just hit this whilst going out of my mind because my endpoint was not updating.

Replacing development with staging in my up.json's domain stanza doesn't seem to do the trick. Even after up stack apply I hit an "already exists in stack arn:aws:cloudformation:ap-southea..." error.

Bringing down the stack is especially painful since I have policies attached to the role. Now CloudFormation seems to just be hanging while trying to rebuild the stack.


Is there some easier migration strategy I am missing? 😭

@tj

Member Author

commented Feb 19, 2018

@kaihendry CloudFormation doesn't seem to understand that one resource is removed and another is created (Terraform handled this kind of thing fine :( ). I think you may have to remove development from up.json, run up stack plan / up stack apply, and then add staging.
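Sketched as a two-step sequence (using only the up commands named above; which config keys to edit is per this thread, not verified against current docs):

```shell
# Step 1: delete the "development" domain mapping from up.json, then
# let CloudFormation remove the old custom-domain resource.
up stack plan
up stack apply

# Step 2: add the domain back under "stages.staging" in up.json,
# then plan/apply again to create the new mapping.
up stack plan
up stack apply
```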

@kaihendry


commented Feb 20, 2018

The downtime while Route 53 records are updated is really quite painful whilst CloudFormation is in UPDATE_IN_PROGRESS.

Is it a reasonable workaround to not manage domains in up.json and just manually update the custom domain names, to minimise downtime? I guess the con is that my domain isn't managed by CloudFormation anymore. That's not a bad thing, since now I at least have some control and can switch with no downtime.
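For reference, a minimal sketch of that manual route with the AWS CLI; the domain name and stage value are hypothetical, and this assumes the API Gateway base path mapping has been removed from (or was never in) the CloudFormation stack:

```shell
# Repoint the custom domain's base path mapping at the new stage
# without a CloudFormation update; the switch is a mapping change
# rather than a stack rebuild.
aws apigateway update-base-path-mapping \
  --domain-name api.example.com \
  --base-path '(none)' \
  --patch-operations op=replace,path=/stage,value=staging
```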

@kaihendry


commented Feb 20, 2018

Oh btw, do the docs need updating? https://up.docs.apex.sh/#configuration.stages

@tj

Member Author

commented Feb 20, 2018

Yeah, it's not ideal, but I can't see too many people swapping production's domain, so the worst case is a bit of downtime for staging: bad, but not as bad. I don't want to get too crazy with encouraging manual work if it can be avoided.

I heard a rumour that CloudFront's propagation will be "instant" sometime soon too, I realllllly hope that's true 😄
