feat: Define logger.warning function #124
Conversation
I have been thinking about how logger.error gets in the way of testing. I believe failures should be as noisy as they can be, and also that they should be tested.

When testing a function failure that has the side effect of calling logger.error, what we really want to observe is some kind of error handling (i.e. the function forces the lambda to return a form of HTTP 4xx). In production, however, we do want to trigger logger.error. This is even more true when the lambda is triggered by SQS: we want to log the error to Epsagon without necessarily failing the function (especially if we are processing several messages in one call).

Enter feature tests. The behaviour above is all good for production, but when running integration tests it spams Epsagon with false positives. Staging functions are full of errors that are triggered on purpose by feature tests (checking that things fail nicely). This noise doesn't help us keep track of real errors that might slip past our attention.

My proposal is to add a logger.warning function that checks the environment (process.env.DEPLOY_ENV) and calls logger.error in production and logger.info otherwise. This is easier to manage (we don't need to deal with SQS or downstream failures) but introduces a difference between staging and production.

See:
- [ENG-125]
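A minimal sketch of the proposed behaviour, for illustration only (this is not the actual Logger.service implementation; the class name and the recorded-calls array are hypothetical, DEPLOY_ENV is the env var named in the proposal):

```javascript
// Hypothetical sketch: warning() escalates to error() only when
// DEPLOY_ENV is 'production', and downgrades to info() elsewhere,
// so staging feature tests don't spam Epsagon with false positives.
class LoggerSketch {
  constructor(env = process.env) {
    this.env = env;
    this.calls = []; // record [level, error] pairs so routing is observable
  }

  error(error) { this.calls.push(['error', error]); }
  info(error) { this.calls.push(['info', error]); }

  warning(error) {
    if (this.env.DEPLOY_ENV === 'production') {
      this.error(error); // noisy: reaches Epsagon
    } else {
      this.info(error); // quiet: staging tests stay clean
    }
  }
}
```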
```diff
 import DependencyInjection from '../../../src/DependencyInjection/DependencyInjection.class';
-import LoggerService from '../../../src/Service/Logger.service';
+import LoggerService, { LOGGING_LEVELS } from '../../../src/Service/Logger.service';
```
Does LOGGING_LEVELS still exist?
```js
 * @param error
 */
warning(error) {
  if (process.env.LOGGER_SOFT_WARNING) {
```
Guessing that if we set LOGGER_SOFT_WARNING=false in the .env or wherever, that would be truthy?
Is it worth requiring it to equal 'true' or something, so no one will ever be tripped up by this? Or maybe not worth worrying about?
Good catch. We can check Boolean(process.env.LOGGER_SOFT_WARNING); this would work with true, false, 1 and 0.
Boolean("0") === true, so I was wrong about that.
how about ["true", "1"].includes(process.env.LOGGER_SOFT_WARNING)?
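Every value read from process.env is a string, so Boolean('false') and Boolean('0') are both true. A sketch of the allowlist check suggested above (the helper name is illustrative, not the merged code):

```javascript
// Illustrative helper: treat only the strings 'true' and '1' as
// enabling the soft-warning behaviour. Env vars are always strings,
// so Boolean(env.LOGGER_SOFT_WARNING) would be true even for 'false'.
function isSoftWarningEnabled(env = process.env) {
  return ['true', '1'].includes(env.LOGGER_SOFT_WARNING);
}
```

With this check, LOGGER_SOFT_WARNING=false, =0, or an unset variable all disable the behaviour, so no one is tripped up by a truthy non-empty string.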
🎉 This PR is included in version 1.3.0 🎉

The release is available on:

Your semantic-release bot 📦🚀