
Copy of 16M+ file to container causes invalid data frame error #460

Closed
maorfr opened this issue Sep 4, 2018 · 10 comments · Fixed by kubernetes/kubernetes#70999
maorfr commented Sep 4, 2018

Hello,

We have been trying to implement a mechanism that copies a file into a container.
We are using this code:

stdin := bytes.NewReader(byteArray)
req := clientset.Core().RESTClient().Post().
	Resource("pods").
	Name("podName").
	Namespace("namespace").
	SubResource("exec")
scheme := runtime.NewScheme()
err := core_v1.AddToScheme(scheme)
check(err)

parameterCodec := runtime.NewParameterCodec(scheme)
req.VersionedParams(&core_v1.PodExecOptions{
	Command:   strings.Fields("cp /dev/stdin /tmp/file"),
	Container: containerName,
	Stdin:     stdin != nil,
	Stdout:    true,
	Stderr:    true,
	TTY:       false,
}, parameterCodec)

exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
check(err)

var stdout, stderr bytes.Buffer
err = exec.Stream(remotecommand.StreamOptions{
	Stdin:  stdin,
	Stdout: &stdout,
	Stderr: &stderr,
	Tty:    false,
})
check(err)

The complete code can be found here

We noticed that when trying to copy a file that is 16M or larger, the copy fails with the output E0904 12:56:33.953212 15894 v2.go:105] invalid data frame.

Any file under 16M (16777215 bytes or less) is copied without any error.
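(The 16777215-byte threshold is suggestive: SPDY data frames carry a 24-bit length field, so a single frame can hold at most 2^24 - 1 bytes of payload. A minimal sketch of that arithmetic, assuming the SPDY framing is the relevant limit here:)

```go
package main

import "fmt"

func main() {
	// A SPDY data frame's length field is 24 bits wide, so the
	// largest payload a single frame can describe is 2^24 - 1 bytes.
	maxFramePayload := 1<<24 - 1
	fmt.Println(maxFramePayload) // 16777215
}
```

This matches the observed cutoff exactly: files of 16777215 bytes or less fit in one frame's length field, anything larger does not.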

Can anyone help on this?

Thanks in advance!

deads2k commented Sep 4, 2018

@liggitt size seems suspicious compared to recent fixes.

liggitt commented Sep 4, 2018

What version of client-go and apiserver is this seen with?

maorfr commented Sep 5, 2018

@deads2k @liggitt thanks for the quick response!

Current versions (from glide.yaml):

- package: k8s.io/api
  version: kubernetes-1.11.2
  subpackages:
  - core/v1
- package: k8s.io/apimachinery
  version: kubernetes-1.11.2
  subpackages:
  - pkg/apis/meta/v1
- package: k8s.io/client-go
  version: kubernetes-1.11.2
  subpackages:
  - kubernetes
  - rest
  - tools/clientcmd

But we have also tried it with:

- package: k8s.io/api
  subpackages:
  - core/v1
- package: k8s.io/apimachinery
  subpackages:
  - pkg/apis/meta/v1
- package: k8s.io/client-go
  version: v8.0.0
  subpackages:
  - kubernetes
  - rest
  - tools/clientcmd

And got the same results.

API Server image is: k8s.gcr.io/kube-apiserver:v1.11.1

Thanks again :)

maorfr commented Nov 13, 2018

Hey @deads2k, @liggitt,
Any news on this one?
Thanks!

liggitt commented Nov 13, 2018

are you able to recreate on a version of the server with kubernetes/kubernetes#67902 fixed (v1.12.0+, v1.11.4+)?

liggitt commented Nov 13, 2018

it looks like the bytes.Reader you are providing as stdin implements a WriteTo method, which io.Copy uses to attempt to write the entire buffer to the stream in a single call; that produces a data frame that is too large.

We can fix this by wrapping the provided stdin stream in an io.Reader that only implements Read, so that the copy always goes through a bounded buffer.

In the meantime, you can do the same to work around the issue, instead of providing the bytes.Reader directly as stdin.

liggitt commented Nov 13, 2018

fixed in kubernetes/kubernetes#70999

maorfr commented Nov 14, 2018

Thanks @liggitt!

That solves the problem!

maorfr commented Nov 14, 2018

Would you like me to close the issue or leave it open?

liggitt commented Nov 14, 2018

> Would you like me to close the issue or leave it open?

you can leave it, it'll get closed when the pull request merges
