In #2841 the question came up whether supporting array validation should be enabled by default, and I was asked to open a new issue to discuss the breaking change.
My naive reasoning was:
In cases that don't use type: 'array' in the schema, there shouldn't be an impact on either behavior or performance, because nothing gets coerced to an array anyway.
As soon as arrays are involved, users would probably want it enabled so that validation does what is intuitively expected. In particular, in query string validation arrays do not work by default without array coercion, as discussed in Array support in querystring validation #2841.
Even for body validation, where arrays already work, there shouldn't be a performance difference: the validation has to run Array.isArray anyway, and no further coercion is required when the object already is an array.
The question is whether this would be a semver-major change. As far as I can see, the change would strictly increase the set of inputs that pass validation, i.e., everything that passed validation before still passes, but more input now fits the schema, because scalars / missing values could be coerced to 1-element or 0-element arrays. In the case of query string validation this is certainly desired, because otherwise passing in arrays doesn't work (easily, or by default). In the case of request body validation, whether one considers it a breaking change is a question of API strictness. This looks like a classic problem of semantic versioning: to 90% of users, a scope-broadening change like this isn't breaking and may be welcome, but a minority may rely on the stricter behavior.
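The "strictly increasing set of passing inputs" argument can be sketched as a tiny coercion rule. This is a hypothetical helper for illustration only, not the actual Ajv/fastify implementation: arrays pass through untouched (so previously valid inputs are unchanged), while scalars and missing values become 1-element and 0-element arrays.

```javascript
// Hypothetical sketch of the coercion rule under discussion, applied
// when a schema expects `type: 'array'`.
function coerceToArray(value) {
  if (Array.isArray(value)) return value; // already an array: unchanged, no extra work
  if (value === undefined) return [];     // missing value -> 0-element array
  return [value];                         // scalar -> 1-element array
}

console.log(coerceToArray(['a', 'b'])); // [ 'a', 'b' ]
console.log(coerceToArray('a'));        // [ 'a' ]
console.log(coerceToArray(undefined));  // []
```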
My org has been making this configuration change as needed on our internal schema container for a long time. I think that "as needed" is key. I genuinely can't think of any case where it would have broken us before we determined we needed it, but I also don't know every scenario. However, I do think ours is a narrower case than the proposed change. So I am 👍 for v4.