
MaxListenersExceededWarning: Possible EventEmitter memory leak detected #1224

Closed
ffxsam opened this issue May 3, 2021 · 11 comments

ffxsam commented May 3, 2021

Bug Report

Current Behavior

When running sls offline, I often see this warning:

(node:55885) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 error listeners added to [ClientRequest]. Use emitter.setMaxListeners() to increase limit
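
For context, Node prints this warning whenever more than ten listeners for the same event are attached to a single EventEmitter. A minimal sketch of the generic mechanism (not serverless-offline's own code) that reproduces it:

import { EventEmitter } from 'events';

const emitter = new EventEmitter();

// The default limit is 10 listeners per event; attaching the 11th
// triggers MaxListenersExceededWarning.
for (let i = 0; i < 11; i += 1) {
  emitter.on('error', () => {});
}

// emitter.setMaxListeners(20) would raise the limit and silence the
// warning, but it does not fix a real leak: listeners that are never
// removed still accumulate.

The warning is usually a symptom of listeners being added per request and never removed, which is why raising the limit only hides the problem.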

Sample Code

serverless.ts
/* eslint-disable max-len */
import type { AWS } from '@serverless/typescript';
import {
  adminGetUsers,
  appsyncResolver,
  checkV1User,
  createEmbed,
  createImage,
  createTag,
  createTrack,
  deleteTag,
  deleteTracks,
  duplicateReel,
  getAccountInfo,
  getCountries,
  getEmbed,
  getGroups,
  getReels,
  getSessions,
  getTags,
  getTracks,
  migrateUser,
  sendEmail,
  setTags,
  updateAccountInfo,
  updateEmbed,
  updateImageProcessingStatus,
  updateShareLink,
  updateTag,
  updateTranscodeStatus,
} from './src/functions';

const serverlessConfiguration: AWS = {
  service: 'api',
  frameworkVersion: '2',
  custom: {
    webpack: {
      webpackConfig: './webpack.config.js',
      includeModules: true,
    },
    'serverless-offline': {
      httpPort: 9001,
      ignoreJWTSignature: true,
    },
    apiHostnames: {
      dev: 'apiv2-dev.mysite.com',
      prod: 'apiv2.mysite.com',
    },
    customDomain: {
      domainName: '${self:custom.apiHostnames.${self:provider.stage}}',
      basePath: '',
      stage: '${self:provider.stage}',
      createRoute53Record: true,
      apiType: 'http',
      certificateName: '*.mysite.com',
      endpointType: 'regional',
    },
    chargebeeSite: {
      dev: 'mysite-test',
      prod: 'mysite-test', // TODO: change this before launch!
    },
  },
  useDotenv: true,
  variablesResolutionMode: '20210326',
  package: {
    individually: true,
  },
  plugins: [
    'serverless-webpack',
    'serverless-offline',
    'serverless-domain-manager',
  ],
  provider: {
    name: 'aws',
    runtime: 'nodejs12.x',
    memorySize: 256,
    versionFunctions: false,
    // @ts-ignore
    logRetentionInDays: '${env:LOG_RETENTION_DAYS}',
    stage: '${opt:stage, "dev"}',
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
      SENTRY_DSN: '${env:SENTRY_DSN}',
      DB_SECRET_STORE_ARN: '${env:DB_SECRET_STORE_ARN}',
      DB_NAME: '${env:DB_NAME}',
      DB_ARN: '${env:DB_ARN}',
      STORAGE_URL: '${env:STORAGE_URL}',
    },
    lambdaHashingVersion: '20201221',
    tracing: {
      lambda: true,
    },
    httpApi: {
      authorizers: {
        cognitoAuthorizer: {
          identitySource: '$request.header.Authorization',
          issuerUrl:
            'https://cognito-idp.us-east-1.amazonaws.com/${env:USER_POOL_ID}',
          audience: [
            '${env:USER_POOL_CLIENT_ID}',
            // To satisfy a bug in serverless-offline
            'dummy',
          ],
        },
      },
      cors: true,
    },
    iam: {
      role: {
        statements: [
          {
            Effect: 'Allow',
            Action: [
              'rds-data:ExecuteSql',
              'rds-data:ExecuteStatement',
              'rds-data:BatchExecuteStatement',
              'rds-data:BeginTransaction',
              'rds-data:RollbackTransaction',
              'rds-data:CommitTransaction',
            ],
            Resource: '*',
          },
          {
            Effect: 'Allow',
            Action: 'secretsmanager:GetSecretValue',
            Resource: '*',
          },
          {
            Effect: 'Allow',
            Action: 'lambda:InvokeFunction',
            Resource: {
              'Fn::Sub':
                'arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:migrate-user-${self:provider.stage}-checkUser',
            },
          },
          {
            Effect: 'Allow',
            Action: 'states:StartExecution',
            Resource: {
              'Fn::Sub':
                'arn:aws:states:${AWS::Region}:${AWS::AccountId}:stateMachine:migrateUser-${self:provider.stage}',
            },
          },
        ],
      },
    },
  },
  functions: {
    adminGetUsers,
    appsyncResolver,
    checkV1User,
    createEmbed,
    createImage,
    createTag,
    createTrack,
    deleteTag,
    deleteTracks,
    duplicateReel,
    getAccountInfo,
    getCountries,
    getEmbed,
    getGroups,
    getReels,
    getSessions,
    getTags,
    getTracks,
    migrateUser,
    sendEmail,
    setTags,
    updateAccountInfo,
    updateEmbed,
    updateImageProcessingStatus,
    updateShareLink,
    updateTag,
    updateTranscodeStatus,
  },
};

module.exports = serverlessConfiguration;

Expected behavior/code

I wouldn't expect to see this warning.

Environment

  • serverless version: 2.38.0
  • serverless-offline version: 6.9.0
  • node.js version: 12.20.1
  • OS: macOS 10.15.7

bjernie commented May 8, 2021

I have the same problem

serverless: 2.39.1
serverless-offline: 7.0.0
node: 15.12.0
OS: macOS 11.3

nicolas-lescop commented

On macOS, I haven't hit this issue yet with this additional configuration:

custom:
    serverless-offline:
        useChildProcesses: true

But on Linux, memory gradually fills up until everything freezes.
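
As I understand it, useChildProcesses runs handlers in separate child processes, so whatever a handler leaks is reclaimed with the child instead of accumulating in the main process; it sidesteps the leak rather than fixing it. The same option should also work as a CLI flag (serverless-offline flags generally mirror the YAML keys):

sls offline --useChildProcesses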

ffxsam (Author) commented Jun 10, 2021

IMO it shouldn't take extra configuration to avoid getting memory leak warnings. Maybe this option should be set by default, out of the box?

EduardMcfly (Contributor) commented Jun 16, 2021

@ffxsam You are absolutely right, it is not the best solution, so I will work on the problem thoroughly.

ffxsam (Author) commented Jun 16, 2021

@EduardMcfly Thanks, Eduard! 👍 It'll save a lot of time for people Googling around trying to figure out how to get rid of that error message. 😉

Alec2435 commented

@EduardMcfly It appears this issue may be upstream of serverless-offline. I've had this same issue forever, to the point where it crashes serverless-offline locally, but today I managed to reproduce the issue in Lambda itself. Since I doubt any code is shared between your codebase and the Lambda Node 14 runtime, the issue seems to come from serverless-webpack.

I've spent some time investigating before, and I think it has something to do with webpack causing a memory leak (which would also mean the listeners mentioned in the warning are never destroyed). I'm not sure how this is happening, though, so I could definitely be wrong.

EduardMcfly (Contributor) commented

@Alec2435 I've been analyzing this, and the problem is due to serverless-offline: when using serverless-webpack the memory loss increases much more, but the origin of the problem is serverless-offline itself, or one of the dependencies it uses:

const { [this.#handlerName]: handler } = await import(this.#handlerPath)

So far my conclusion is that it may be due to the dependency cache; running profilers, this is what grows on each call of the handler.
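
If the module cache is indeed the culprit, the usual mechanism looks like the sketch below: with caching disabled, each invocation imports the handler under a fresh cache-busting specifier, and Node's ESM module map never evicts old entries. (handlerPath and the query-string suffix are hypothetical, for illustration only, not quoted from the codebase.)

const handlerPath = './src/handler.js'; // hypothetical path

async function loadHandler() {
  // Every distinct specifier creates a brand-new entry in Node's ESM
  // module cache, and entries are never evicted -- so each call compiles
  // and retains another copy of the handler module.
  const { handler } = await import(`${handlerPath}?update=${Date.now()}`);
  return handler;
}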

EduardMcfly (Contributor) commented

> @EduardMcfly It appears this issue may be upstream of serverless-offline. […] the issue seems to come from serverless-webpack.

Thanks for your answer, it was really useful.

vlechemin commented

Do you have any news? It's still happening, thanks!

makivlach commented

custom:
  serverless-offline:
    allowCache: true # Important: This will prevent serverless-offline from eating all of your memory
    useChildProcesses: true

This solved the problem for me on Linux.
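
If I understand the option correctly, allowCache: true makes serverless-offline reuse the first imported handler module instead of busting the cache on every invocation, avoiding the unbounded module-map growth sketched earlier (loadHandlerCached is illustrative, not from the codebase):

// With caching allowed, the same specifier is imported on every call, so
// Node returns the single cached module instance and memory stays flat.
async function loadHandlerCached(handlerPath) {
  const { handler } = await import(handlerPath);
  return handler;
}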

dnalborczyk (Collaborator) commented

This should be fixed with the recent release of v9: worker threads are now the default, with proper memory management. If you want to reload handlers during development, use the new --reloadHandler flag. --allowCache has been removed.

Please open a new issue if this still causes any problems.
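
For anyone updating their config, v9 usage would look like this (the YAML key is assumed to mirror the CLI flag, as serverless-offline options normally do):

sls offline --reloadHandler

custom:
  serverless-offline:
    reloadHandler: true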
