
Set the maximum open connections limit in PG and MySQL target configs #10558

Merged · 3 commits into minio:master on Sep 25, 2020

Conversation

Praveenrajmani
Contributor

Description

As a bulk/recursive delete opens multiple connections at once, the database's default open-connections limit can be reached, which results in the following error:

`FATAL: sorry, too many clients already`

By capping the number of open connections at a reasonable value (2), we ensure that the maximum open connections are never exhausted and stay within bounds.

The queries are simple inserts/updates/deletes, which work correctly and remain sufficient with a maximum open connection limit of 2.
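For illustration, the cap is the standard connection-pool knob in Go's `database/sql` package. The sketch below is a minimal, hedged example (the DSN, table name, and query are placeholders, not the actual MinIO target code), assuming the `lib/pq` PostgreSQL driver:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver; registers "postgres" with database/sql
)

func main() {
	// Placeholder DSN; the real notification target builds this from its config.
	db, err := sql.Open("postgres", "host=localhost user=minio dbname=events sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Cap the pool so bulk/recursive deletes cannot exhaust the server's
	// connection limit ("FATAL: sorry, too many clients already").
	db.SetMaxOpenConns(2)

	// Simple insert/update/delete statements queue on the pooled connections
	// instead of opening new ones. Table and key are illustrative only.
	if _, err := db.Exec(`DELETE FROM minio_events WHERE key = $1`, "bucket/object"); err != nil {
		log.Fatal(err)
	}
}
```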

Motivation and Context

Fixes #10553

How to test this PR?

The steps are provided in #10554

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • Fixes a regression (If yes, please add commit-id or PR # here)
  • Documentation needed
  • Unit tests needed

@Praveenrajmani
Contributor Author

Can we add MaxOpenConns as a config setting, @harshavardhana, so that users can set their own limits?

@harshavardhana
Member

> Can we add MaxOpenConns as a config setting, @harshavardhana, so that users can set their own limits?

Yes, we can, @Praveenrajmani.
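A user-facing knob could simply feed `SetMaxOpenConns`. The sketch below is only illustrative: the environment variable name, package name, and default are assumptions, not necessarily what the MinIO config ended up using:

```go
package target

import (
	"os"
	"strconv"
)

// maxOpenConnsFromEnv returns the connection cap to apply to the target's
// *sql.DB via db.SetMaxOpenConns. The env var name and the default of 2
// are illustrative assumptions only.
func maxOpenConnsFromEnv() int {
	const defaultMaxOpenConns = 2
	v := os.Getenv("MINIO_NOTIFY_POSTGRES_MAX_OPEN_CONNECTIONS")
	if v == "" {
		return defaultMaxOpenConns
	}
	n, err := strconv.Atoi(v)
	if err != nil || n < 1 {
		// Fall back to the conservative default on an unset or invalid value.
		return defaultMaxOpenConns
	}
	return n
}
```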

@Praveenrajmani Praveenrajmani marked this pull request as draft September 24, 2020 18:02
@Praveenrajmani Praveenrajmani changed the title Set the maximum open connections limit in PG and SQL target configs [WIP] Set the maximum open connections limit in PG and SQL target configs Sep 24, 2020
@Praveenrajmani Praveenrajmani changed the title [WIP] Set the maximum open connections limit in PG and SQL target configs [WIP] Set the maximum open connections limit in PG and MySQL target configs Sep 24, 2020
As a bulk/recursive delete opens multiple connections at once, the database's default open-connections limit can be reached, which results in the following error:

```FATAL:  sorry, too many clients already```

By capping the number of open connections at a reasonable value (`2`), we ensure that the maximum open connections are never exhausted and stay within bounds.

The queries are simple inserts/updates/deletes, which work correctly and remain sufficient with a maximum open connection limit of 2.

Fixes minio#10553
@Praveenrajmani Praveenrajmani changed the title [WIP] Set the maximum open connections limit in PG and MySQL target configs Set the maximum open connections limit in PG and MySQL target configs Sep 24, 2020
@Praveenrajmani Praveenrajmani marked this pull request as ready for review September 24, 2020 19:07
@Praveenrajmani
Contributor Author

The PR is ready for review. PTAL @harshavardhana

Contributor

@kannappanr left a comment


LGTM

@minio-trusted
Contributor

Mint Automation

| Test | Result |
|------|--------|
| mint-large-bucket.sh | ✔️ |
| mint-fs.sh | ✔️ |
| mint-gateway-s3.sh | ✔️ |
| mint-erasure.sh | ✔️ |
| mint-dist-erasure.sh | ✔️ |
| mint-zoned.sh | ✔️ |
| mint-gateway-nas.sh | ✔️ |
| mint-gateway-azure.sh | more... |

10558-57b9edc/mint-gateway-azure.sh.log:

Running with
SERVER_ENDPOINT:      minio-dev6.minio.io:30164
ACCESS_KEY:           minioazure
SECRET_KEY:           ***REDACTED***
ENABLE_HTTPS:         0
SERVER_REGION:        us-east-1
MINT_DATA_DIR:        /mint/data
MINT_MODE:            full
ENABLE_VIRTUAL_STYLE: 0

To get logs, run 'docker cp 8c40f82e140f:/mint/log /tmp/mint-logs'

(1/15) Running aws-sdk-go tests ... done in 8 seconds
(2/15) Running aws-sdk-java tests ... done in 2 seconds
(3/15) Running aws-sdk-php tests ... done in 1 minutes and 8 seconds
(4/15) Running aws-sdk-ruby tests ... done in 19 seconds
(5/15) Running awscli tests ... done in 2 minutes and 44 seconds
(6/15) Running healthcheck tests ... done in 0 seconds
(7/15) Running mc tests ... done in 3 minutes and 50 seconds
(8/15) Running minio-dotnet tests ... done in 1 minutes and 36 seconds
(9/15) Running minio-go tests ... FAILED in 1 minutes and 49 seconds
{
  "args": {
    "bucketName": "minio-go-test-nknho10o5soblfwg",
    "objectName": "test-object",
    "opts": "",
    "size": -1
  },
  "duration": 1032,
  "function": "PutObject(bucketName, objectName, reader,size,opts)",
  "message": "Unexpected size",
  "name": "minio-go: testPutObjectStreaming",
  "status": "FAIL"
}
(9/15) Running minio-java tests ... done in 10 minutes and 38 seconds
(10/15) Running minio-js tests ... FAILED in 46 seconds
{
  "name": "minio-js",
  "function": "\"after all\" hook in \"functional tests\"",
  "duration": 81,
  "status": "FAIL",
  "error": "S3Error: The bucket you tried to delete is not empty at Object.parseError (node_modules/minio/dist/main/xml-parsers.js:86:11) at /mint/run/core/minio-js/node_modules/minio/dist/main/transformers.js:156:22 at DestroyableTransform._flush (node_modules/minio/dist/main/transformers.js:80:10) at DestroyableTransform.prefinish (node_modules/readable-stream/lib/_stream_transform.js:129:10) at prefinish (node_modules/readable-stream/lib/_stream_writable.js:611:14) at finishMaybe (node_modules/readable-stream/lib/_stream_writable.js:620:5) at endWritable (node_modules/readable-stream/lib/_stream_writable.js:643:3) at DestroyableTransform.Writable.end (node_modules/readable-stream/lib/_stream_writable.js:571:22) at IncomingMessage.onend (_stream_readable.js:682:10) at endReadableNT (_stream_readable.js:1252:12) at processTicksAndRejections (internal/process/task_queues.js:80:21)"
}
(10/15) Running minio-py tests ... done in 19 minutes and 14 seconds
(11/15) Running s3cmd tests ... done in 2 minutes and 25 seconds
(12/15) Running s3select tests ... done in 41 seconds
(13/15) Running security tests ... done in 0 seconds

Executed 13 out of 15 tests successfully.

Deleting image on docker hub
Deleting image locally
Error: No such image: minio/minio:10558-57b9edc

@harshavardhana harshavardhana merged commit b880796 into minio:master Sep 25, 2020

Successfully merging this pull request may close these issues.

Bulk removal of records does not remove rows in namespace postgres table