View the logs for the latest deployment of a config #3943

Merged (3 commits) on Oct 20, 2015

Conversation

@0xmichalis

This PR adds support for getting the logs for a deployment config:

$ oc logs dc/mysql --follow

will stream the logs of the latest deployment for the mysql deploymentConfig.

Most of the functionality for viewing logs of older deployments is already in place, though it won't be exposed for now.

Fixes #3544

@0xmichalis

Currently:

[vagrant@openshiftdev sample-app]$ oc get dc
NAME       TRIGGERS                    LATEST VERSION
database   ConfigChange                1
frontend   ConfigChange, ImageChange   0

[vagrant@openshiftdev sample-app]$ oc get pods
NAME                        READY     STATUS    RESTARTS   AGE
database-1-o2191            1/1       Running   0          14s
ruby-sample-build-1-build   1/1       Running   0          19s

[vagrant@openshiftdev sample-app]$ oc deploy database --logs
Error from server: User "test" cannot get deploymentlogs in project "test"

[vagrant@openshiftdev sample-app]$ oc deploy database --logs --config=openshift.local.config/master/admin.kubeconfig
Error from server: deploymentConfig "database" not found

What am I missing?

@0xmichalis

@deads2k @jhadvig too

@deads2k

deads2k commented Jul 29, 2015

Don't run the command using admin.kubeconfig, that's using a different namespace, so it's pretty hosed.

You need to add an entry in https://github.com/openshift/origin/blob/master/pkg/cmd/server/bootstrappolicy/policy.go#L78 for the admin, editor, and view roles. Though I'm pretty sure you want deploymentconfig/logs (subresource) as opposed to a top-level resource.
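For illustration, the kind of rule being suggested might look like the sketch below; the field and helper names are assumptions based on how such policy rules are typically declared, not code copied from the tree:

// Hypothetical rule granting read access to the log subresource;
// it would be added to the admin, editor, and view role definitions.
{
    Verbs:     util.NewStringSet("get"),
    Resources: util.NewStringSet("deploymentconfigs/log"),
},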

@deads2k

deads2k commented Jul 29, 2015

Also, how close are your changes to defaulting resources? I think I'd rather carry reasonable patches on kubectl logs if they are close to getting accepted upstream than introduce a --logs flag. That's just hard to use: oc logs pod, oc build-logs, oc deploy --logs?


// NoWait if true causes the call to return immediately even if the deployment
// is not available yet. Otherwise the server will wait until the deployment has started.
NoWait bool
Contributor:

How about inverting this to something like Immediate or Wait which defaults to false?

Contributor (Author):

Cesar added this in BuildLogOptions. I am more worried that this should be upstream. @csrwng does it make sense to have this as a field in PodLogOptions and instead of checking for the build/deployment phase && this field, check for pod phase && this field here?

Contributor:

I had the same thought re: upstream... it seems unfortunate that we'd need to make an API revision for such generic options.

Contributor (Author):

If this doesn't go upstream, we won't be able to specify it via oc logs (once oc logs supports builds and deployments).

Contributor:

I don't think in the short term there is a requirement that we use oc logs for all types of logs. It's a nice-to-have.

The type under discussion, in pkg/deploy/api/types.go:

type DeploymentLogs struct {
    kapi.TypeMeta
    kapi.ListMeta
}

// DeploymentLogOptions is the REST options for a deployment log
type DeploymentLogOptions struct {
    kapi.TypeMeta
    // Follow if true indicates that the deployment log should be streamed until
    // the deployment terminates.
    Follow bool
    // NoWait if true causes the call to return immediately even if the deployment
    // is not available yet. Otherwise the server will wait until the deployment has started.
    NoWait bool
}

Contributor:

Note I'm not against trying to solve these, just that it's not required to ship 3.1.

Contributor:

@smarterclayton can you make a call on "NoWait" vs. something like "Immediate"? Super confusing name for a boolean.

Contributor:

Make this consistent with BuildLogOptions for now; we'll fix it in the v2 API.

Contributor (Author):

Make this consistent with BuildLogOptions

It already is.

@0xmichalis

Also, how close are your changes to defaulting resources? I think I'd rather carry reasonable patches on kubectl logs if they are close to getting accepted upstream than introduce a --logs flag. That's just hard to use: oc logs pod, oc build-logs, oc deploy --logs?

Those changes are already functional; we just need to reach consensus on kubectl logs (kubernetes/kubernetes#10707).

Tbh I would prefer just oc logs, but the upstream PR hasn't been getting much traffic lately. :)


deployment, err := r.Deploy.GetDeployment(ctx, deployutil.LatestDeploymentNameForConfig(config))
if err != nil {
// TODO: Fallback to latest-1?
Contributor:

Less efficient, but you could solve this by using the same strategy as https://github.com/openshift/origin/blob/master/pkg/cmd/cli/cmd/rollback.go#L311

Contributor (Author):

Done.

Contributor:

It would be simpler to always just use deployment.Items[0]... is there a case where that's insufficient?

Contributor (Author):

Yes, we may get back a list of deployments that doesn't contain the latest one.

But using Items[0] and just checking versions is enough. Updated.
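A minimal sketch of that version check, with hypothetical helper names (the actual lookup and version-extraction functions may differ):

// Assumes deployments.Items is sorted newest-first.
latest := &deployments.Items[0]
if deployutil.DeploymentVersionFor(latest) != config.LatestVersion {
    return nil, fmt.Errorf("deployment version %d for config %q not found",
        config.LatestVersion, config.Name)
}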

Contributor:

Based on a comment in IRC, I was totally wrong about this... I think what you originally had was right. If you fall back at all, then you might get logs for N-1 if N completed since deployer pods are deleted upon successful completion.

@ironcladlou

Also, how close are your changes to defaulting resources? I think I'd rather carry reasonable patches on kubectl logs if they are close to getting accepted upstream than introduce a --logs flag. That's just hard to use: oc logs pod, oc build-logs, oc deploy --logs?

I think @smarterclayton had some opinion on this too. I agree that having different log commands for these things is awkward.

@0xmichalis

I see the deployer pod is missing after the deployment finishes. Is this expected? @ironcladlou

[vagrant@openshiftdev sample-app]$ oc get pods
NAME                        READY     STATUS    RESTARTS   AGE
database-1-deploy           0/1       Pending   0          2s
ruby-sample-build-1-build   0/1       Pending   0          1s

[vagrant@openshiftdev sample-app]$ oc deploy database --logs
I0730 11:05:22.677982       1 deployer.go:195] Deploying test/database-1 for the first time (replicas: 1)
I0730 11:05:22.701834       1 lifecycle.go:78] Created lifecycle pod database-1-prehook for deployment test/database-1
I0730 11:05:22.702109       1 lifecycle.go:85] Waiting for hook pod test/database-1-prehook to complete
I0730 11:05:33.973449       1 recreate.go:94] Pre hook finished
I0730 11:05:33.973598       1 recreate.go:125] Scaling test/database-1 to 1
I0730 11:05:36.046010       1 lifecycle.go:78] Created lifecycle pod database-1-posthook for deployment test/database-1
I0730 11:05:36.046154       1 lifecycle.go:85] Waiting for hook pod test/database-1-posthook to complete
I0730 11:05:46.810389       1 lifecycle.go:48] Hook failed, ignoring: 
I0730 11:05:46.810418       1 recreate.go:139] Post hook finished
I0730 11:05:46.810425       1 recreate.go:143] Deployment database-1 successfully made active

[vagrant@openshiftdev sample-app]$ oc get pods
NAME                        READY     STATUS       RESTARTS   AGE
database-1-pzlpb            1/1       Running      0          7m
frontend-1-a17ee            1/1       Running      0          4m
frontend-1-i0n9f            1/1       Running      0          5m
ruby-sample-build-1-build   0/1       ExitCode:0   0          7m


// DeploymentClient defines a local interface to a deployment client for testability.
type DeploymentClient interface {
GetDeploymentConfig(ctx kapi.Context, name string) (*deployapi.DeploymentConfig, error)
Contributor:

I think we've reached agreement to use a client.DeploymentConfigNamespacer

Contributor:

I think we've reached agreement to use a client.DeploymentConfigNamespacer

Agree. I would put waitForDeployment behind an interface or typedef'd function for testing, though.

Contributor (Author):

Is a DeploymentConfigNamespacer enough? I mostly need to interact with the deployments (list, watch) so I guess I need a ReplicationControllersNamespacer as well, right?
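As a sketch, the combined dependency could be expressed as one interface embedding both namespacers (the composition is illustrative, not the agreed design):

// Hypothetical facade over the OpenShift and Kubernetes clients.
type deploymentLogClient interface {
    client.DeploymentConfigNamespacer
    kclient.ReplicationControllersNamespacer
}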

@0xmichalis changed the title from "[WIP] deploy: Support --logs" to "deploy: Support viewing the logs of a deployment" on Aug 4, 2015
@0xmichalis

This is ready for another review.

@0xmichalis

Verifying Descriptions for Spec: /home/travis/gopath/src/github.com/openshift/origin/api/swagger-spec/api-v1.json
Verifying Descriptions for Spec: /home/travis/gopath/src/github.com/openshift/origin/api/swagger-spec/oapi-v1.json
Description missing for: v1.deploymentlog
FAILURE: Add missing descriptions to api/definitions

Should deploymentlog have a description in api/definitions? I don't see any for buildlog. Also, do I really have to install gradle? :)

@deads2k

deads2k commented Aug 4, 2015

I don't see any for buildlog.

buildlog was grandfathered in, but we want to fully document our REST API, so I doubt we'll allow any more exceptions. You just have to describe what the type is, when to use it, and how to use it.

Gradle isn't required unless you're going to generate something for openshift-docs.

@@ -28,7 +29,9 @@ var KnownValidationExceptions = []reflect.Type{
var MissingValidationExceptions = []reflect.Type{
reflect.TypeOf(&buildapi.BuildLogOptions{}), // TODO, looks like this one should have validation
reflect.TypeOf(&buildapi.BuildLog{}), // TODO, I have no idea what this is doing
reflect.TypeOf(&imageapi.DockerImage{}), // TODO, I think this type is ok to skip validation (internal), but needs review
reflect.TypeOf(&deployapi.DeploymentLogOptions{}),
reflect.TypeOf(&deployapi.DeploymentLog{}),
Contributor:

Why doesn't this object need validation?

Contributor:

I suspect this one is only returned, never accepted, and could be considered for a KnownValidationException (the block above).

Contributor (Author):

Can you explain the "only returned, never accepted"? This is a virtual subresource, essentially unused; its client interface is masking a call to "deploymentConfigs/log".
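In other words, the client interface ends up issuing a request against the log subresource, roughly of the form below (the path is assembled from the subresource naming above, not copied from the routing code):

GET /oapi/v1/namespaces/<namespace>/deploymentconfigs/<name>/log?follow=true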

@deads2k

deads2k commented Aug 4, 2015

@smarterclayton @fabianofranz I really don't like the madness of three very different ways to get logs. The difference between builds and deployments on the cli are particularly jarring. What do we want them to look like in the end? I would suggest oc build <subcommand> and oc deploy <subcommand> rather than a mass of conflicting flags.

@@ -175,7 +200,9 @@ func (o DeployOptions) RunDeploy() error {
}
return list, nil
},

LogsForConfigFn: func(config *deployapi.DeploymentConfig, opts deployapi.DeploymentLogOptions) (io.ReadCloser, error) {
Contributor:

This looks suspiciously like a DeploymentLogsNamespacer or DeploymentLogInterface

Contributor (Author):

Everything here looks suspiciously like an already existing client interface; I am just being consistent with the existing design.

@deads2k

deads2k commented Aug 4, 2015

@Kargakis I think you should factor out a DeploymentLogOptions object. Doing that should allow easier composition and will make it a little easier to review.

@@ -33,6 +34,7 @@ var MissingPrinterCoverageExceptions = []reflect.Type{
reflect.TypeOf(&authorizationapi.SubjectAccessReview{}),
reflect.TypeOf(&authorizationapi.ResourceAccessReview{}),
reflect.TypeOf(&deployapi.DeploymentConfigRollback{}),
reflect.TypeOf(&deployapi.DeploymentLogOptions{}),
Contributor:

You can't add this here. If there's not going to be a printer, add it above with a justification. Since we don't take a -f for oc get, it's probably good enough to say that we don't ever return this object from the API.
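Concretely, that would mean moving the entry into the known-exceptions list above with a justification comment, along these lines (the wording is a sketch):

reflect.TypeOf(&deployapi.DeploymentLogOptions{}), // options object, never returned by the API, so no printer is needed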

@0xmichalis

@deads2k is there any other way, except a generator, to have our own flags without maintaining our own version of logs? I am open to anything that would do that. Maybe pass the flagset into the factory? Does that feel like the right thing to do?

@0xmichalis

Until we come up with a sane solution for older deployment logs, can we merge this? Is it worth staying blocked? @ironcladlou @deads2k @smarterclayton

deployRollback := &deployrollback.RollbackGenerator{}
deployRollbackClient := deployrollback.Client{
DCFn: deployConfigRegistry.GetDeploymentConfig,
RCFn: clientDeploymentInterface{kclient}.GetDeployment,
GRFn: deployRollback.GenerateRollback,
}
configClient, deploymentClient := c.DeploymentConfigClients()
Contributor:

Why call DeploymentConfigClients twice?

@smarterclayton

For older deployments, you have three options: add generic flags to logs, add specific flags to logs, or add flags to other commands.

In the short term, older deployments isn't a blocker. But if the logs command isn't usable for the primary reason we started these discussions (viewing the logs of a deployment is hard), we're doing a lot of refactoring but not actually making the user's life better (build logs going away is maybe a net positive).

I'd like to see an issue to track that use case to completion. Some of the arguments we're making stray a bit into "perfect being the enemy of the good" territory. Users who have deployments fail are not able to quickly figure out why. Note: this may mean that the logs the deployer generates are also inadequate.

@@ -163,6 +168,13 @@ func (o *OpenShiftLogsOptions) RunLog() error {
}
return o.runLogsForBuild(build)

case "dc", "deploymentconfig", "deploymentconfigs":
Contributor:

This is spreading, which isn't good. Normalization, lowercasing, pluralization, and alias->actual mapping should happen in one place. https://github.com/openshift/origin/pull/4947/files#diff-8040eed03335aa79518a88a4de565decR37 looks promising. Add a TODO to replace the lowercasing above with a call to a single helper function, and clear out the singular and alias checks in this function.

Contributor:

We have that in the RESTMapper already; why do we need a second location?

Contributor:

True. So...

resourceType := "pods"
...
_, kind, err := mapper.VersionAndKindForResource(resourceType)
if err != nil {
    return err
}
resourceType, _ = meta.KindToResource(kind, false)

and then only check "pods", "buildconfigs", "builds", "deploymentconfigs", etc.?

Does VersionAndKindForResource handle singular resources, or does resourcebuilder do magic to make that work?

Contributor (Author):

We have that in the RESTMapper already; why do we need a second location?

See #4947 (comment)

Does VersionAndKindForResource handle singular resources

Yes, it does.

Contributor:

I'm not sure what comment you're pointing to. For actually resolving names we should always use the RESTMapper, full stop. You can round-trip the RESTMapper back to the full type if you somehow need that.

@0xmichalis

I would add a flag to logs w/o a second thought, but we first need to make it resource-agnostic upstream and see how that flag would fit w/o having our own logs version.

We should definitely have an issue for older deployment logs (I will open one); the server code is already there, we just need to find a sane way to expose it to the client w/o causing future maintenance headaches :)

@0xmichalis

Opened #5163

@smarterclayton

Rename this issue to something more generic so I don't accidentally claim it when writing up the release notes.

@0xmichalis changed the title from "Support viewing the logs for a deployment" to "View the logs for the latest deployment of a config" on Oct 16, 2015
@deads2k

deads2k commented Oct 16, 2015

@Kargakis speak of the devil; while running expose in the rebase I hit: F1016 11:43:50.298138 25312 helpers.go:259] err accessing flag port for command expose: trying to get int value of flag of type string

@0xmichalis

@Kargakis speak of the devil; while running expose in the rebase I hit: F1016 11:43:50.298138 25312 helpers.go:259] err accessing flag port for command expose: trying to get int value of flag of type string

Yes, I changed --port upstream to be a string so that it would default to an empty string instead of -1, which was the default while it was an int. I have added a TODO somewhere, but I think that PR is still not merged. Sorry :)
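As a sketch of the flag change being described, in cobra/pflag terms (the exact registration in kubectl may differ):

// Before: an int flag whose -1 default leaked into defaulted output.
// cmd.Flags().Int("port", -1, "The port that the service should serve on")
// After: a string flag that simply defaults to empty.
cmd.Flags().String("port", "", "The port that the service should serve on")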

@0xmichalis

Any more comments here, or should I squash?

@smarterclayton

Is the upstream commit still necessary?

@smarterclayton

[test]

@0xmichalis

Is the upstream commit still necessary?

Not after the rebase.

@openshift-bot

Evaluated for origin test up to 6d732b1

@0xmichalis

Rebased on top of latest master and removed the UPSTREAM commit.

oc logs dc/ continues to work fine.

@smarterclayton

LGTM [merge]

@openshift-bot

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/5962/) (Image: devenv-rhel7_2503)

@openshift-bot

Evaluated for origin merge up to 6d732b1

openshift-bot pushed a commit that referenced this pull request Oct 20, 2015
@openshift-bot merged commit 805aba6 into openshift:master on Oct 20, 2015
@0xmichalis deleted the deploy-logs branch on October 21, 2015 07:14