fix: handle calling Contents on massive files (#6772)
* fix: handle calling Contents on massive files

We have a message size limit enforced by the gRPC connection - however,
we then go on to host an HTTP server on that connection, and we weren't
placing any restriction on the size of the `Write`s.

To resolve this, any `Write` larger than the size limit (adjusted to
leave room for headers) is split into multiple `Write` calls, which the
`grpchijack` package then frames as separate `BytesMessage`s.

Note: no such transformation is required for `Read`s, since Go permits
short reads but not short writes (so `grpchijack` can legally return
fewer than the requested number of bytes).

Arguably, this should be implemented upstream; however, it's tricky.
Currently, the only upstream use of `grpchijack` is to host the session,
which performs its own message-shortening procedures, which seems
fair - this patch just adds a similar generic implementation of our own
for HTTP serving.

Signed-off-by: Justin Chadwell <me@jedevc.com>

* chore: add changelog

Signed-off-by: Justin Chadwell <me@jedevc.com>

---------

Signed-off-by: Justin Chadwell <me@jedevc.com>
jedevc committed Mar 5, 2024
1 parent 7d606df commit 18c8c42
Showing 3 changed files with 44 additions and 4 deletions.
6 changes: 6 additions & 0 deletions .changes/unreleased/Fixed-20240228-155029.yaml
@@ -0,0 +1,6 @@
kind: Fixed
body: Fix panic in Contents for massive files
time: 2024-02-28T15:50:29.369831359Z
custom:
Author: jedevc
PR: "6772"
13 changes: 10 additions & 3 deletions core/integration/file_test.go
@@ -260,9 +260,16 @@ func TestFileExport(t *testing.T) {
	maxChunkSize := buildkit.MaxFileContentsChunkSize
	fileSizeBytes := maxChunkSize*4 + 1 // +1 so it's not an exact number of chunks, to ensure we cover that case

	_, err := c.Container().From(alpineImage).WithExec([]string{"sh", "-c",
		fmt.Sprintf("dd if=/dev/zero of=/file bs=%d count=1", fileSizeBytes)}).
		File("/file").Export(ctx, "some-pretty-big-file")
	file := c.Container().
		From(alpineImage).
		WithExec([]string{"sh", "-c", fmt.Sprintf("dd if=/dev/zero of=/file bs=%d count=1", fileSizeBytes)}).
		File("/file")

	dt, err := file.Contents(ctx)
	require.NoError(t, err)
	require.EqualValues(t, fileSizeBytes, len(dt))

	_, err = file.Export(ctx, "some-pretty-big-file")
	require.NoError(t, err)

	stat, err := os.Stat(filepath.Join(wd, "some-pretty-big-file"))
29 changes: 28 additions & 1 deletion engine/server/server.go
@@ -12,6 +12,7 @@ import (
	"sync"
	"time"

	"github.com/containerd/containerd/defaults"
	"github.com/dagger/dagger/analytics"
	"github.com/dagger/dagger/auth"
	"github.com/dagger/dagger/core"
@@ -157,7 +158,7 @@ func (srv *DaggerServer) ServeClientConn(
	}()
	srv.clientMu.Unlock()

	conn = newLogicalDeadlineConn(nopCloserConn{conn})
	conn = newLogicalDeadlineConn(splitWriteConn{nopCloserConn{conn}, defaults.DefaultMaxSendMsgSize * 95 / 100})
	l := &singleConnListener{conn: conn, closeCh: make(chan struct{})}
	go func() {
		<-ctx.Done()
@@ -452,3 +453,29 @@ func (c *withDeadlineConn) SetWriteDeadline(t time.Time) error {

	return nil
}

type splitWriteConn struct {
	net.Conn
	maxMsgSize int
}

func (r splitWriteConn) Write(b []byte) (n int, err error) {
	for {
		if len(b) == 0 {
			return
		}

		var bnext []byte
		if len(b) > r.maxMsgSize {
			b, bnext = b[:r.maxMsgSize], b[r.maxMsgSize:]
		}

		n2, err := r.Conn.Write(b)
		n += n2
		if err != nil {
			return n, err
		}

		b = bnext
	}
}
