
App Deploy (gRPC series) #211

Merged: 23 commits merged into master from dg-app_deploy-grpc on Jul 14, 2017

Conversation

@drgarcia1986 (Contributor) commented on Jul 12, 2017:

related #203

It's a big pull request with tons of code changes to sensitive features, so please pay attention and don't hesitate to point out errors or even improvements.

BTW, Teresa will now delete "build pods" after they finish their job.

Good Luck 🇯🇲


@drgarcia1986 mentioned this pull request on Jul 12, 2017.
@aguerra (Contributor) left a comment:

Most comments are suggestions; the main point of attention is keeping the deploy files in memory. Great job.


currentClusterName, err := getCurrentClusterName()
if err != nil {
fmt.Fprintln(os.Stderr, "error on read config file:", err)
@aguerra:

I guess "error reading config file is better"

}
if n == 0 {
break
}
@aguerra:

Maybe check for io.EOF

@drgarcia1986 (author):

Yeah, much better.
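
A minimal sketch of the suggested pattern, assuming the loop reads from a plain io.Reader (the buffer size and variable names are illustrative, not the actual Teresa code):

```go
buf := make([]byte, 32*1024)
for {
	n, err := r.Read(buf)
	if n > 0 {
		// handle buf[:n], e.g. append it to the in-memory deploy file
	}
	if err == io.EOF {
		break // stream exhausted, stop reading
	}
	if err != nil {
		return err // a real read error, not just end of stream
	}
}
```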

@@ -19,6 +19,9 @@ type Operations interface {
Create(user *storage.User, app *App) error
Logs(user *storage.User, appName string, lines int64, follow bool) (io.ReadCloser, error)
Info(user *storage.User, appName string) (*Info, error)
TeamName(appName string) (string, error)
Meta(appName string) (*App, error)
HasPermission(user *storage.User, appName string) bool
@aguerra:

I guess the name Meta is a little bit misleading as it returns an app instance, but I can't think of anything better now.

@drgarcia1986 (author):

Me too.
I'll think about this name again, but for now I don't have any ideas.

@drgarcia1986 (author):

Let me know if you find a better name.

@drgarcia1986 (author):

What do you think about Get?
I'm not sure, because that method returns only information about the app, not a powerful struct with useful methods. BUT, is it better than Meta? (or not? 🤔)

@aguerra:

It's better; perhaps Instance.


func TestNewBuildSpec(t *testing.T) {
expectedDeployId := "123"
expectedTarBallLocation := "narnia"
@aguerra:

Maybe mordor :trollface:

go func() {
defer w.Close()
if err = ops.buildApp(tarBall, a, deployId, buildDest, w); err != nil {
return
@aguerra:

Maybe we should log the buildApp error

@drgarcia1986 (author):

Oh, I forgot :/
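
A sketch of the fix, using the standard library logger as a stand-in (Teresa's actual logging package may differ; deployId is borrowed from the surrounding snippet):

```go
go func() {
	defer w.Close()
	if err := ops.buildApp(tarBall, a, deployId, buildDest, w); err != nil {
		// don't swallow the failure silently; record it before returning
		log.Printf("error building app (deploy %s): %v", deployId, err)
		return
	}
}()
```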

scanner := bufio.NewScanner(r)
for scanner.Scan() {
c <- fmt.Sprintln(scanner.Text())
}
@aguerra:

I guess we should check scanner.Err() at the end of the loop, not sure though.

@drgarcia1986 (author), Jul 13, 2017:

If an error is raised, we can't do much more than log it :/

@aguerra:

fair enough
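
The agreed-upon shape would be roughly the following sketch (again with the standard library logger as a placeholder):

```go
scanner := bufio.NewScanner(r)
for scanner.Scan() {
	c <- fmt.Sprintln(scanner.Text())
}
if err := scanner.Err(); err != nil {
	// not much to do here besides recording the read failure
	log.Printf("error reading deploy messages: %v", err)
}
```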

}
defer rc.Close()

deployMsgs := sendMsgsToAChannel(rc)
@aguerra:

Maybe rename sendMsgsToAChannel -> channelFromReader

@drgarcia1986 (author):

😱 🏆 This name is much better on a monumental scale.
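
Putting the rename together with the scanner loop above, the helper might look roughly like this (a sketch; whether the goroutine and channel creation live inside the helper is an assumption):

```go
// channelFromReader exposes the lines of r on a channel and closes the
// channel once the reader is exhausted.
func channelFromReader(r io.Reader) <-chan string {
	c := make(chan string)
	go func() {
		defer close(c)
		scanner := bufio.NewScanner(r)
		for scanner.Scan() {
			c <- fmt.Sprintln(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			log.Printf("error reading deploy messages: %v", err)
		}
	}()
	return c
}
```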

}
io.Copy(w, stream)

if err = k.waitPodEnd(pod, 1*time.Second, 5*time.Minute); err != nil {
@aguerra:

For deploy safety we need more than 5 minutes for the finish timeout, maybe via more args. Also, we could increase the poll interval (less CPU intensive).

@drgarcia1986 (author):

Hm, what do you think about 30 minutes for the finish timeout and 3 seconds for the check interval?

@aguerra:

That's ok.
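
For the record, the call with the agreed values would look something like this (the constant names are illustrative, assuming waitPodEnd keeps its (pod, checkInterval, timeout) argument order):

```go
const (
	podCheckInterval = 3 * time.Second
	podFinishTimeout = 30 * time.Minute
)

if err = k.waitPodEnd(pod, podCheckInterval, podFinishTimeout); err != nil {
	return err
}
```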

return int(state.ExitCode)
}
}
return 0
@aguerra:

Maybe return error if the pod is running

@drgarcia1986 (author):

Are you saying that because Terminated is a pointer and can be nil?
I'll guard against that.

@aguerra:

If I call this function on a running pod I would expect an error: pod is still running, no exit code yet. But this is 💅

@drgarcia1986 (author):

Gotcha!!! I'll change it.
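
A rough standalone sketch of what the reviewer is asking for; the helper name and the Kubernetes package path are assumptions (Teresa's vendored client may use a different import path):

```go
import (
	"fmt"

	v1 "k8s.io/api/core/v1" // import path may differ in Teresa's vendored client
)

// podExitCode returns the exit code of the first terminated container, or an
// error when no container has terminated yet (i.e. the pod is still running).
func podExitCode(pod *v1.Pod) (int, error) {
	for _, cs := range pod.Status.ContainerStatuses {
		if state := cs.State.Terminated; state != nil {
			return int(state.ExitCode), nil
		}
	}
	return 0, fmt.Errorf("pod %q is still running, no exit code yet", pod.Name)
}
```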

func newInClusterK8sClient() (Client, error) {
conf, err := restclient.InClusterConfig()
func (k *k8sClient) CreateDeploy(deploySpec *deploy.DeploySpec) error {
replicas := k.currentPodReplicasFromDeploy(deploySpec.Namespace, deploySpec.Name)
@aguerra:

Maybe a Django-style name: CreateOrUpdateDeploy

@drgarcia1986 (author):

I don't like functions with more than one purpose, but you're right, the current name does not reflect reality.
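
A minimal sketch of the create-or-update pattern being suggested; the lower-level helpers (updateDeploy, createDeploy, isNotFound) are hypothetical placeholders for whatever client calls Teresa actually uses:

```go
func (k *k8sClient) CreateOrUpdateDeploy(deploySpec *deploy.DeploySpec) error {
	// try to update first and fall back to create when the
	// deployment does not exist yet
	if err := k.updateDeploy(deploySpec); err != nil {
		if !isNotFound(err) {
			return err
		}
		return k.createDeploy(deploySpec)
	}
	return nil
}
```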

@aguerra (Contributor) left a comment:

🔁

@drgarcia1986 merged commit 8bd4daa into master on Jul 14, 2017.
@drgarcia1986 deleted the dg-app_deploy-grpc branch on Jul 14, 2017 at 16:41.