I have code which wants to open 28 connections "simultaneously" (very quickly, for sure) and I get an error every time.
I understand that Go doesn't want to retry the request when it fails because it is a POST (see here), even though this is doing a PUT of an object (presumably the streaming upload goes through S3's multipart API, which starts with a POST). However, I think this happens because localstack only accepts a few simultaneous connections (i.e. the default is probably 5). I tried spacing out my connections, though, and that doesn't help; I still get the same error. I even tried to make the connections one after the other and upload the 28 objects serially as fast as possible, and even that fails once in a while (it works most of the time, though, so it is a good enough workaround for now).
I'm wondering whether there is a parameter in localstack that I can easily tweak to increase the number of connections it will accept in quick succession. With our bigger servers (96 processors) it's much more advantageous to write all 28 objects in parallel, and it would be best if we could use localstack to test our code.
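In the meantime, a middle ground between fully serial and fully parallel uploads is to cap how many PutObject calls are in flight at once with a buffered-channel semaphore. The sketch below is only an illustration, assuming the same minio-go (v6-style) API, endpoint, credentials and bucket name as the repro further down; for brevity it uploads small in-memory payloads of known size instead of streaming through pipes.

package main

import (
	"bytes"
	"fmt"
	"sync"

	"github.com/minio/minio-go"
)

func main() {
	// endpoint, credentials and bucket name are placeholders; adjust for your setup
	mc, err := minio.New("192.168.2.95:4572", "localstack", "localstack", false)
	if err != nil {
		fmt.Printf("error: minio.New(): %v\n", err)
		return
	}

	const count = 28
	const maxInFlight = 4 // at most 4 uploads run concurrently; tune as needed

	sem := make(chan struct{}, maxInFlight) // buffered channel acting as a counting semaphore
	var wg sync.WaitGroup

	for i := 0; i < count; i++ {
		idx := i
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks while maxInFlight uploads are running)
			defer func() { <-sem }() // release the slot when this upload finishes

			body := []byte(fmt.Sprintf("FILE #%d\n", idx))
			filename := fmt.Sprintf("whatever-%d.mp3", idx)
			_, err := mc.PutObject("bucket-name", filename, bytes.NewReader(body), int64(len(body)),
				minio.PutObjectOptions{ContentType: "audio/mpeg"})
			if err != nil {
				fmt.Printf("error: mc.PutObject() #%d: %v\n", idx, err)
			}
		}()
	}

	wg.Wait() // all uploads have either succeeded or reported an error
}

Setting maxInFlight to 1 reproduces the serial behaviour described above; it can be raised again once the underlying limit is understood.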
Here is sample code one can use to reproduce the problem:
package main

import (
	"fmt"
	"io"
	"os"
	"time"

	"github.com/minio/minio-go"
)

func main() {
	const count = 28

	mc, err := minio.New("192.168.2.95:4572", "localstack", "localstack", false)
	if err != nil {
		fmt.Printf("error: minio.New(): %v\n", err)
		os.Exit(1)
	}

	// create pipes which are used to send data to the S3 objects
	outputRC := make([]io.ReadCloser, count)
	inputPipe := make([]*os.File, count)
	for i := 0; i < count; i++ {
		outputRC[i], inputPipe[i], err = os.Pipe()
		if err != nil {
			fmt.Printf("error: os.Pipe() #%d: %v\n", i, err)
			os.Exit(1)
		}
	}

	// now set up the readers (each goroutine reads one pipe and saves it as an S3 object)
	for i := 0; i < count; i++ {
		//time.Sleep(100 * time.Millisecond) // this helps to some extent, but we still get many errors, even with a very long sleep (i.e. 1 full second!)
		idx := i
		go func() {
			filename := fmt.Sprintf("whatever-%d.mp3", idx)
			// size -1 means unknown, so the client streams until the pipe is closed
			_, err := mc.PutObject("bucket-name", filename, outputRC[idx], -1,
				minio.PutObjectOptions{ContentType: "audio/mpeg"})
			if err != nil {
				fmt.Printf("error: mc.PutObject() #%d: %v\n", idx, err)
				//os.Exit(1)
			}
		}()
	}

	// write a little data to each pipe...
	//time.Sleep(5 * time.Second)
	for i := 0; i < count; i++ {
		fmt.Fprintf(inputPipe[i], "FILE #%d\n", i)
	}

	// ...then close the write ends so the uploads see EOF
	//time.Sleep(10 * time.Second)
	for i := 0; i < count; i++ {
		inputPipe[i].Close()
	}

	// give the uploads time to finish before exiting
	time.Sleep(60 * time.Second)
}
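Since the Go transport will not retry the failed POST on its own, another stopgap is to retry the PutObject call explicitly with a short backoff. This is only a sketch with the same placeholder endpoint, credentials and bucket name as above; note that it only works when the payload can be replayed (an in-memory buffer here), whereas the pipes in the repro can be read just once.

package main

import (
	"bytes"
	"fmt"
	"time"

	"github.com/minio/minio-go"
)

// putWithRetry re-issues a PutObject call a few times with a simple linear backoff.
func putWithRetry(mc *minio.Client, bucket, name string, body []byte, attempts int) error {
	var err error
	for attempt := 0; attempt < attempts; attempt++ {
		if attempt > 0 {
			time.Sleep(time.Duration(attempt) * 250 * time.Millisecond) // wait a bit longer before each retry
		}
		// the payload is recreated from the in-memory buffer on every attempt
		_, err = mc.PutObject(bucket, name, bytes.NewReader(body), int64(len(body)),
			minio.PutObjectOptions{ContentType: "audio/mpeg"})
		if err == nil {
			return nil
		}
	}
	return err
}

func main() {
	// endpoint, credentials and bucket name are placeholders; adjust for your setup
	mc, err := minio.New("192.168.2.95:4572", "localstack", "localstack", false)
	if err != nil {
		fmt.Printf("error: minio.New(): %v\n", err)
		return
	}
	if err := putWithRetry(mc, "bucket-name", "whatever-0.mp3", []byte("FILE #0\n"), 3); err != nil {
		fmt.Printf("error: putWithRetry: %v\n", err)
	}
}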
Thanks for reporting and for providing the self-contained example, @AlexisWilke. This issue should be fixed in #1986 - can you please give it a try with the latest version of the Docker image? Please report here if the problem persists. Thanks