Some of the test-data CSVs were hand-crafted; others are (partial) snapshots of real data from a certain point in time. It's hard to tell what special cases either type covers, especially for the latter. The snapshots of real data can also be pretty hard to update with new columns or special cases.
It'd take a fair bit of work, but I think we need to start from scratch with totally fake, hand-crafted data: begin by making a variety of fake departments representing various real-world categories, including any special cases we can think of. Where possible, text fields in the fake data should self-document which special case a row represents.
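To make the "self-documenting" idea concrete, here's a minimal sketch of what such a fixture generator could look like. The column names, file path, and `write_departments` helper are all hypothetical, not the repo's actual schema:

```python
# Hypothetical sketch: hand-crafted fake departments whose text fields
# self-document the special case each row exercises. Column names and
# the csv layout are assumptions, not the real test-data schema.
import csv

FAKE_DEPARTMENTS = [
    # (id, name, notes) -- name/notes describe the case being covered
    ("D001", "Plain Department", "baseline: no special characters or edge cases"),
    ("D002", "Dept, With Embedded Comma", "covers csv quoting of commas"),
    ("D003", "Département Accentué", "covers non-ASCII text handling"),
    ("D004", "", "covers empty name field"),
    ("D005", "Renamed Department (was 'Old Name')", "covers historical renames"),
]

def write_departments(path: str) -> None:
    """Write the self-documenting fake departments to a test-data csv."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "notes"])
        writer.writerows(FAKE_DEPARTMENTS)

if __name__ == "__main__":
    write_departments("test-data/departments.csv")
```

With rows like these, a snapshot diff failure points straight at the special case that broke, instead of at an opaque slice of real data.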
This will mean a hard reset on all the snapshot test baselines. We'll need to be pretty careful that those don't degrade (e.g. lose coverage of some special case), but it should result in higher-quality, more maintainable snapshot tests at the end of the day. They should also run a good deal faster if we keep things reasonable: the test indicators.csv is currently nearly 6K lines and the related snapshot file is over 10K, and that whole snapshot file has to be diffed if even a single line of the test output doesn't match.
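One way to keep the diff cost bounded while we're at it (a sketch only; the directory layout and `check_snapshot` helper are assumptions, not how the /server tests currently work) is to store one small baseline per fake case rather than a single monolithic file:

```python
# Hypothetical sketch of per-case snapshot baselines: a mismatch only
# requires diffing one small file, not a 10K-line monolith.
from pathlib import Path

SNAPSHOT_DIR = Path("test-data/snapshots")

def check_snapshot(case_id: str, actual: str) -> None:
    """Compare one case's output against its own small baseline file."""
    baseline_file = SNAPSHOT_DIR / f"{case_id}.snap"
    if not baseline_file.exists():
        # First run records the baseline for review.
        baseline_file.parent.mkdir(parents=True, exist_ok=True)
        baseline_file.write_text(actual, encoding="utf-8")
        return
    expected = baseline_file.read_text(encoding="utf-8")
    assert actual == expected, f"snapshot mismatch for {case_id}"
```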
Alternatively, rethink our /server testing approach altogether.