Upgrade from 5.9.23 -> anything > 5.9.24 Breaks my serverless lambda app using mongoose. #9423
Comments
I have tried my best to reproduce the error with no luck. The only thing I can see is that a disconnected event is emitted. I added a console.log statement to the 'buffer' event as well but saw nothing.
@natac13 can you please show how you're calling it? Also, why do you have that in your model file?
To be honest, I forget. And I have removed it after posting this. So now I just have the normal model pattern:

```ts
export default mongoose.model('Member', MemberSchema)
```
No problem. Here is my connection module:

```ts
import mongoose from 'mongoose'
import { IS_PROD, IS_DEV, IS_TEST } from '../common/constants'
import logger from './logger'

const options = {
  user: process.env.DB_USER,
  pass: process.env.DB_PASS,
  useNewUrlParser: true,
  useFindAndModify: false, // use findOneAndUpdate() instead of findAndModify()
  useCreateIndex: true, // uses createIndex() over ensureIndex()
  autoIndex: !IS_PROD, // create the indexes laid out in model files on startup
  poolSize: 10, // maintain up to 10 socket connections
  connectTimeoutMS: 5000, // give up initial connection after 5 seconds
  socketTimeoutMS: 33000, // close sockets after 33 seconds of inactivity
  useUnifiedTopology: true,
  // Buffering means mongoose will queue up operations if it gets
  // disconnected from MongoDB and send them when it reconnects.
  // With serverless, better to fail fast if not connected.
  bufferCommands: false, // disable mongoose buffering
  bufferMaxEntries: 0 // and MongoDB driver buffering
}

interface ConnectToDB {
  (): Promise<typeof mongoose | null>
}

let cachedDb: typeof mongoose | null = null

const connectToDB: ConnectToDB = async () => {
  if (IS_TEST) {
    return Promise.resolve(cachedDb)
  }
  if (cachedDb === null) {
    return mongoose.connect(process.env.MONGO_URI, options).then((db) => {
      cachedDb = db
      if (IS_DEV) {
        mongoose.set('debug', true)
        logger.info(`⌨️ Dev Server Connected to Dev Database.`)
      } else if (IS_PROD) {
        logger.info(`🌎 Production Server Connected to Prod Database.`)
      }
      return db
    })
  } else {
    logger.info(`♻️ Recycle DB Connection ♻️`)
    return Promise.resolve(cachedDb)
  }
}

export default connectToDB
```

Which gets used here:

```ts
import { APIGatewayProxyHandler } from 'aws-lambda'
import { IS_DEV } from '../common/constants'
import './aws/dynamodb'
import { corsOptions } from './cors'
import dbConnection from './dbConnection'
import { server } from './server'

if (IS_DEV) {
  require('dotenv').config()
}

export const handler: APIGatewayProxyHandler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false
  dbConnection().then(() => {
    server.createHandler({
      cors: corsOptions
    })(event, context, callback)
  })
}
```

The Apollo server is set up like this:

```ts
const server = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources,
  plugins: [cookiePlugin],
  context: apolloContext,
  introspection: true,
  playground: IS_DEV
    ? {
        settings: {
          'request.credentials': 'include',
          'editor.fontSize': 16,
          // @ts-ignore
          'schema.polling.interval': 20000,
          'schema.polling.enable': false
        }
      }
    : false
})
```

I put the models on the apollo `dataSources`.
One possible explanation is the race condition in your `connectToDB` function: if two invocations come in concurrently, both see `cachedDb === null` and both open a connection. Caching the promise instead avoids that:

```ts
if (cachedDb === null) {
  cachedDb = mongoose.connect(process.env.MONGO_URI, options).then((db) => {
    if (IS_DEV) {
      mongoose.set('debug', true)
      logger.info(`⌨️ Dev Server Connected to Dev Database.`)
    } else if (IS_PROD) {
      logger.info(`🌎 Production Server Connected to Prod Database.`)
    }
    return db
  })
  return cachedDb
} else {
  logger.info(`♻️ Recycle DB Connection ♻️`)
  return Promise.resolve(cachedDb)
}
```

However, I don't think that issue explains the behavior you're seeing; I think you're more likely seeing an instance of #9179. When you say you "found that when the lambda was hanging", does the Lambda respond after approximately 10 seconds, or does it take longer than that?
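The race described above can be demonstrated without mongoose at all. Below is a minimal sketch: `fakeConnect` is a hypothetical stand-in for `mongoose.connect` (not part of the original code) that just counts how many connections get opened. Caching the resolved value lets two concurrent callers both start a connection, while caching the promise does not.

```typescript
// Hypothetical stand-in for mongoose.connect: counts connections opened.
let connectionsOpened = 0
const fakeConnect = (): Promise<string> => {
  connectionsOpened++ // a real connection would be opened here
  return Promise.resolve('db')
}

// Pattern 1: cache the resolved value (race-prone). The assignment to
// cachedDb only happens in a later microtask, so a second synchronous
// caller still sees null and opens a second connection.
let cachedDb: string | null = null
const connectCachingValue = async (): Promise<string> => {
  if (cachedDb === null) {
    return fakeConnect().then((db) => {
      cachedDb = db
      return db
    })
  }
  return cachedDb
}

// Pattern 2: cache the promise itself (race-free). The cache is written
// synchronously, before the connection resolves, so the second caller
// reuses the in-flight promise.
let cachedPromise: Promise<string> | null = null
const connectCachingPromise = (): Promise<string> => {
  if (cachedPromise === null) {
    cachedPromise = fakeConnect()
  }
  return cachedPromise
}

// Two "concurrent invocations" of each pattern:
connectCachingValue()
connectCachingValue()
const valueCacheOpens = connectionsOpened // 2: both callers saw cachedDb === null

connectionsOpened = 0
connectCachingPromise()
connectCachingPromise()
const promiseCacheOpens = connectionsOpened // 1: second caller reuses the in-flight promise

console.log(valueCacheOpens, promiseCacheOpens)
```

This is why assigning the promise to the cache (rather than assigning the value inside `.then`) prevents duplicate connections on warm, concurrent Lambda invocations.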
Sorry for my confusion, but do I not want to wait until the connection succeeds to set `cachedDb`? Is the reason the following from the docs?
Your suggestion:

```ts
if (cachedDb === null) {
  cachedDb = mongoose.connect(process.env.MONGO_URI, options).then((db) => {
    if (IS_DEV) {
      mongoose.set('debug', true)
      logger.info(`⌨️ Dev Server Connected to Dev Database.`)
    } else if (IS_PROD) {
      logger.info(`🌎 Production Server Connected to Prod Database.`)
    }
    return db
  })
  return cachedDb
} else {
  logger.info(`♻️ Recycle DB Connection ♻️`)
  return Promise.resolve(cachedDb)
}
```

vs. mine:

```ts
if (cachedDb === null) {
  return mongoose.connect(process.env.MONGO_URI, options).then((db) => {
    cachedDb = db
    if (IS_DEV) {
      mongoose.set('debug', true)
      logger.info(`⌨️ Dev Server Connected to Dev Database.`)
    } else if (IS_PROD) {
      logger.info(`🌎 Production Server Connected to Prod Database.`)
    }
    return db
  })
} else {
  logger.info(`♻️ Recycle DB Connection ♻️`)
  return Promise.resolve(cachedDb)
}
```
I agree, since I have tried with different types of connections.
The behaviour is such that I can navigate to the webpage and see that it connects to the server backend and MongoDB successfully. I can then log in and make other requests, as long as each request is made within 10-15 seconds of the last. If, however, a request to the server is made more than 15 seconds after the last one, the lambda times out. The timeout seems to happen because the database operation hangs.
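One way to make a hang like this easier to diagnose is to race the suspect operation against an explicit deadline, so it surfaces as a descriptive error instead of a silent 6-second Lambda timeout. A sketch, where `withTimeout` is a hypothetical helper (not part of the reporter's code) and the never-settling promise stands in for a hung query:

```typescript
// Rejects if the wrapped promise does not settle within `ms` milliseconds,
// so a hung MongoDB operation surfaces as an error instead of a Lambda timeout.
const withTimeout = <T>(promise: Promise<T>, ms: number, label: string): Promise<T> => {
  let timer!: ReturnType<typeof setTimeout>
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
  })
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer))
}

// Demo: a promise that never settles, standing in for a hung query.
const hungQuery = new Promise<string>(() => {})

withTimeout(hungQuery, 50, 'findOne')
  .then((res) => console.log('result:', res))
  .catch((err) => console.log(err.message)) // "findOne timed out after 50ms"
```

Wrapping individual queries this way would at least confirm which operation hangs after the ~10 second idle window.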
Sorry it took us a while to get to this issue; this is one of those issues that's unfortunately nearly impossible for us to debug without access to the code itself.

Re "Sorry for my confusion, but do I not want to wait until the connection succeeds to set cachedDb?": yes, that is correct. Otherwise other lambdas won't know that there's already a connection in progress. This issue affects Mongoose 5.9 and earlier; in 5.10.0 we made multiple parallel calls to `mongoose.connect()` safe.

The best I can tell, #9179 and 9a23c42 are the most likely culprits, because of a change we made to connection handling in v5.9.24.

I'd also recommend upgrading to Mongoose 5.10.13 to see if the fix for #9179 helps. In #9179, we found an issue with the MongoDB driver where some of the MongoDB driver's timeout logic gets confused when the Lambda goes to sleep and then wakes up a few minutes later. You can confirm whether this fix would actually help you by reducing `heartbeatFrequencyMS`:

```ts
const options = {
  user: process.env.DB_USER,
  pass: process.env.DB_PASS,
  useNewUrlParser: true,
  useFindAndModify: false, // use findOneAndUpdate() instead of findAndModify()
  useCreateIndex: true, // uses createIndex() over ensureIndex()
  autoIndex: !IS_PROD, // create the indexes laid out in model files on startup
  poolSize: 10, // maintain up to 10 socket connections
  connectTimeoutMS: 5000, // give up initial connection after 5 seconds
  socketTimeoutMS: 33000, // close sockets after 33 seconds of inactivity
  useUnifiedTopology: true,
  // Buffering means mongoose will queue up operations if it gets
  // disconnected from MongoDB and send them when it reconnects.
  // With serverless, better to fail fast if not connected.
  bufferCommands: false, // disable mongoose buffering
  bufferMaxEntries: 0, // and MongoDB driver buffering
  heartbeatFrequencyMS: 2000 // <-- may help if you're experiencing the same issue as #9179
}
```
Before creating an issue please make sure you are using the latest version of mongoose
I am. I have tracked it down to the change from 5.9.23 to 5.9.24, possibly #9179 or #9218, as they both deal with connections.
Do you want to request a feature or report a bug?
Some sort of bug.
What is the current behavior?
When I navigate to the application and log in, everything works fine: Mongoose connects to Atlas and responds with a successful login and the user profile. However, if I wait about 10 seconds and refresh the page to reload the user profile, the lambda hangs for the 6-second timeout and fails. But if I make a request in under 10 seconds, things work fine. Also, after the lambda fails, if I just refresh again things work as normal, but with a new lambda. I am very lost on what I am doing wrong with my setup that one of these bug fixes broke my app.
If the current behavior is a bug, please provide the steps to reproduce.
The important parts of my setup are as follows.
A sample model setup.
The index.js file where I set up the connection and serverless apollo.
What is the expected behavior?
What are the versions of Node.js, Mongoose and MongoDB you are using? Note that "latest" is not a version.
Version 5.9.23 works, but anything from 5.9.24 onwards does not.
I have tried many different setups as well, like using an async lambda handler, but I still get the same result as stated above: the app works, but if a lambda sits for about 10 seconds then there is a failure. I first thought it was due to updating serverless or the lambda httpapi to payload 2.0, but after 2 days I have finally narrowed it down to 5.9.23 -> 5.9.24. I just do not understand what I have to change in my setup to comply with the changes made to mongoose to get my app back to a working condition.