storagenode: decline uploads when there are too many live requests #2397
@@ -7,6 +7,7 @@ import (
 	"context"
 	"io"
 	"os"
+	"sync/atomic"
 	"time"

 	"github.com/golang/protobuf/ptypes"
@@ -55,6 +56,7 @@ type OldConfig struct {
 // Config defines parameters for piecestore endpoint.
 type Config struct {
 	ExpirationGracePeriod time.Duration `help:"how soon before expiration date should things be considered expired" default:"48h0m0s"`
+	MaxConcurrentRequests int           `help:"how many concurrent requests are allowed, before uploads are rejected." default:"30"`

 	Monitor monitor.Config
 	Sender  orders.SenderConfig

Review thread on `MaxConcurrentRequests`:

- How would SNOs know what value to configure here? Is there a way for the storage node to self-diagnose and determine whether it is overloaded?
- Probably with some kind of benchmarking by some other party. The issue is that it's not just about the storage node itself, but also about the network bandwidth it's able to deliver. We could try monitoring bandwidth usage and back off once it stops increasing, or watch load or memory usage, but these are all much more complicated than a plain "this is how much this storage node can serve" setting.
- I think it's totally fine to have a default; if someone notices a lot of issues, they can just tune it down. We had the same mechanic in V2 and that worked fine.
- Yeah, I'm not sure what to put as the limit for now. Is there an easy way to test how much we can handle?
- My server node was handling 15-20 requests per second at most, when fully cached.
- I could run a few test uploads on my local network to see what I can come up with.
- I wonder if the storage nodes could obtain data from one or more satellites to help them decide how many requests they can handle and adjust the max.
- Dynamic scaling like that just adds another layer of potential issues @phutchins
- Let me know when you get a number from testing @stefanbenten
- I will test that after lunch 👍
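The thread keeps circling back to what the right number actually is. A cheap way to ground the default in data, sketched below as an assumption rather than anything in this PR (the names `live`, `peak`, and `enter` are invented), is to record a high-water mark of simultaneous requests using the same sync/atomic primitives the diff already relies on; the peak an unthrottled node observes under real traffic is a defensible starting point for `MaxConcurrentRequests`.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

var live, peak int32

// enter registers one in-flight request, raises the observed peak with a
// compare-and-swap loop, and returns the matching exit function.
func enter() (exit func()) {
	n := atomic.AddInt32(&live, 1)
	for {
		p := atomic.LoadInt32(&peak)
		if n <= p || atomic.CompareAndSwapInt32(&peak, p, n) {
			break
		}
	}
	return func() { atomic.AddInt32(&live, -1) }
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer enter()()                  // count this goroutine while it "works"
			time.Sleep(5 * time.Millisecond) // stand-in for serving a request
		}()
	}
	wg.Wait()
	fmt.Println("peak simultaneous requests:", atomic.LoadInt32(&peak))
}
```

The compare-and-swap loop matters because two requests can race to raise the peak; a plain store could lose the larger value.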
@@ -74,6 +76,8 @@ type Endpoint struct {
 	orders      orders.DB
 	usage       bandwidth.DB
 	usedSerials UsedSerials
+
+	liveRequests int32
Review thread on `liveRequests` (see the alignment sketch after this hunk):

- We should make this one of the first struct fields, so ARM alignment is better guaranteed (https://golang.org/pkg/sync/atomic/#pkg-note-BUG).
- Wasn't this an issue for 64-bit values only?
- Oh yeah, good point. Maybe worth testing on an ARM device.
 }

 // NewEndpoint creates a new piecestore endpoint.
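On the alignment question above: the bug note in the sync/atomic docs only applies to 64-bit atomic operations on 32-bit platforms (including 32-bit ARM), where the operand must be 64-bit aligned and only the first word of an allocated struct or slice is guaranteed to be. Below is a minimal sketch, with a hypothetical `stats` struct that is not part of this PR, of how one would order fields if a 64-bit counter were ever added; the existing int32 `liveRequests` is unaffected.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// stats is a hypothetical struct (not from this PR) illustrating the
// sync/atomic alignment note the reviewers reference.
type stats struct {
	bytesServed  int64 // 64-bit: keep first so atomic.AddInt64 is safe on 32-bit ARM
	liveRequests int32 // 32-bit: safe at any offset
}

func main() {
	s := &stats{}
	atomic.AddInt64(&s.bytesServed, 1024)
	atomic.AddInt32(&s.liveRequests, 1)
	fmt.Println(s.bytesServed, s.liveRequests)
}
```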
@@ -91,13 +95,18 @@ func NewEndpoint(log *zap.Logger, signer signing.Signer, trust *trust.Pool, moni
 		orders:      orders,
 		usage:       usage,
 		usedSerials: usedSerials,
+
+		liveRequests: 0,
 	}, nil
 }

 // Delete handles deleting a piece on piece store.
 func (endpoint *Endpoint) Delete(ctx context.Context, delete *pb.PieceDeleteRequest) (_ *pb.PieceDeleteResponse, err error) {
 	defer mon.Task()(&ctx)(&err)

+	atomic.AddInt32(&endpoint.liveRequests, 1)
+	defer atomic.AddInt32(&endpoint.liveRequests, -1)
+
 	if delete.Limit.Action != pb.PieceAction_DELETE {
 		return nil, Error.New("expected delete action got %v", delete.Limit.Action) // TODO: report grpc status unauthorized or bad request
 	}
@@ -128,6 +137,15 @@ func (endpoint *Endpoint) Delete(ctx context.Context, delete *pb.PieceDeleteRequ
 func (endpoint *Endpoint) Upload(stream pb.Piecestore_UploadServer) (err error) {
 	ctx := stream.Context()
 	defer mon.Task()(&ctx)(&err)
+
+	liveRequests := atomic.AddInt32(&endpoint.liveRequests, 1)
+	defer atomic.AddInt32(&endpoint.liveRequests, -1)
+
+	if int(liveRequests) > endpoint.config.MaxConcurrentRequests {
+		endpoint.log.Error("upload rejected, too many requests", zap.Int32("live requests", liveRequests))
+		return status.Error(codes.Unavailable, "storage node overloaded")
+	}
+
 	startTime := time.Now().UTC()

 	// TODO: set connection timeouts
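Note the order of operations above: the request counts itself first and checks the limit second, and the deferred decrement runs whether the upload is served or declined, so rejected requests never leak a slot. With the default of 30, the 31st simultaneous request is the first to be turned away. Here is a minimal, self-contained sketch of the same mechanism (`limiter`, `do`, and `max` are illustrative names, not from the PR):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

var errOverloaded = errors.New("storage node overloaded")

type limiter struct {
	live int32 // in-flight request count, updated atomically
	max  int32 // limit above which new requests are declined
}

// do runs work unless too many requests are already in flight.
func (l *limiter) do(work func()) error {
	live := atomic.AddInt32(&l.live, 1) // count ourselves first...
	defer atomic.AddInt32(&l.live, -1)  // ...always uncount on exit...
	if live > l.max {                   // ...then check the limit
		return errOverloaded
	}
	work()
	return nil
}

func main() {
	l := &limiter{max: 2}
	release := make(chan struct{})
	var inside sync.WaitGroup
	inside.Add(2)

	// Occupy both slots with requests that block until released.
	for i := 0; i < 2; i++ {
		go func() {
			_ = l.do(func() {
				inside.Done()
				<-release
			})
		}()
	}
	inside.Wait()

	// The third concurrent request is declined up front.
	fmt.Println(l.do(func() {})) // prints: storage node overloaded
	close(release)
}
```

On the wire the rejection surfaces as gRPC `codes.Unavailable`; a caller can recover the code with `status.FromError(err)` and treat it as a cue to retry against a different node.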
@@ -321,6 +339,10 @@ func (endpoint *Endpoint) Upload(stream pb.Piecestore_UploadServer) (err error)
 func (endpoint *Endpoint) Download(stream pb.Piecestore_DownloadServer) (err error) {
 	ctx := stream.Context()
 	defer mon.Task()(&ctx)(&err)
+
+	atomic.AddInt32(&endpoint.liveRequests, 1)
+	defer atomic.AddInt32(&endpoint.liveRequests, -1)
+
 	startTime := time.Now().UTC()

 	// TODO: set connection timeouts
Review thread on the test configuration:

- Leaving testplanet at 100, because having it lower would probably break some tests.
- Worth testing where the limit is, in another PR of course.