Include caveats for updating functional testcases in DG #1937
Comments
Perhaps to unblock that issue we can resort to the following: …
Just checking: will this make updating the puml outputs difficult, i.e. lose the ability to ensure that the puml outputs are as expected? (when the output is a legitimate change)
May I ask what you mean by manually check (but also in "test scripts"?)
Something like this; the ability can be moved to our test scripts: …
I assume you mean: …
I think this works fine except for the case where someone who has never seen the generated puml files (perhaps he just cloned the repo; this actually also applies to any dev, but it is more serious for newcomers) works on some feature which affects the puml files. He then runs the tests and sees that they pass, yet the generated puml output may not actually be what he intended.
yup
Hmm, I don't see how this would be any different; we are only checking for the existence of files currently too (via the expected / generated folder difference; see the sketch at the end of this comment).
I assume this is referring to individual environment settings (e.g. resolution, OS). This is unavoidable, even in a user setting. We assume the user (author) has their environment set up properly. As for a developer, they are bound to notice at some point if their puml environment is functioning incorrectly. (Even then it's not a huge issue, as the CI environment builds the images and runs the tests, and a different image is still an image (file existence) =P)
Slightly unrelated (could be a "different" (error) image): there's also the issue of error handling discussed previously in #1245 (comment) and #1903, which should be implemented in an automated way (a developer shouldn't have to manually check output files).
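For concreteness, a rough sketch of what such an existence-only folder-difference check could look like; `walk-sync` and the helper name here are illustrative, not necessarily what the actual test script uses:

```js
// Sketch: assert the same set of file paths exists under both folders,
// without comparing any file contents.
const walkSync = require('walk-sync'); // any recursive file lister works here

function assertSameFileSet(expectedDir, actualDir) {
  const expected = new Set(walkSync(expectedDir, { directories: false }));
  const actual = new Set(walkSync(actualDir, { directories: false }));
  expected.forEach((p) => {
    if (!actual.has(p)) throw new Error(`Missing generated file: ${p}`);
  });
  actual.forEach((p) => {
    if (!expected.has(p)) throw new Error(`Unexpected generated file: ${p}`);
  });
}
```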
That's right for running the tests. I'm referring to when we update the tests, where I believe we should verify that the content of the generated files is indeed as intended. With the suggested approach, this does not hold.
I believe there's no need; as mentioned, there are only two scenarios in which the content is "different": …
The first case we should fix via proper error handling / propagation (s.t. it is also detectable via tests, if we want to go further and test the puml executable's behaviour). I don't think we've ever mandated checking how the puml output looks either (https://markbind.org/devdocs/devGuide/workflow.html). Keep in mind it's also ultimately the CI environment doing the "final litmus check" (where it would be impossible to inspect visually), not developers.
If doing some puml-specific updating of the tests (e.g. adding a new puml file) …
https://markbind.org/devdocs/devGuide/workflow.html#updating-and-writing-tests
It's certainly not a "mandate", but given that the updatetest script overwrites the generated files in such a way that it's dangerous to be careless, erring on the side of caution is likely to be beneficial?
I see, thanks for pointing that page out. Yes, "mandate" is the wrong word -- more so "advise".
But I still don't see how this aspect would be any different; that warning would already cover the puml files. Note again that we aim to test for file existence, not how the puml looks. (Assume the puml team is doing its thing correctly.)
…
Think of it this way: we are just moving the vcs (git) to the …
Doing this will mean that we have to update our test script's logic to deal with the difference in the number of files in a different manner? CMIIW, currently we ensure that the number of generated files matches the expected site's number. There are also quite a number of font files that are modified during an `npm run updatetest`.
yes, I think so. We can do a simple addition:
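A rough, hypothetical sketch of the shape such an addition could take, assuming the script already holds `expectedPaths`/`actualPaths` arrays and some list of puml output paths (the names here are illustrative):

```js
// Hypothetical sketch: exclude puml-generated images before comparing counts.
const path = require('path');

// illustrative list; the real script would derive this from the test site
const pumlOutputPaths = ['diagrams/sequence.png', 'diagrams/activity.png'];

const isPumlOutput = filePath =>
  pumlOutputPaths.some(pumlPath =>
    path.normalize(filePath).endsWith(path.normalize(pumlPath)));

// expectedPaths / actualPaths are assumed to come from the surrounding script
const expectedCount = expectedPaths.filter(p => !isPumlOutput(p)).length;
const actualCount = actualPaths.filter(p => !isPumlOutput(p)).length;
if (expectedCount !== actualCount) {
  throw new Error('Different number of files built');
}
```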
Ahh... I thought this was just me as well. This didn't use to occur, right? Based on the …
Yes, perhaps we should really start exploring comparing binary file contents in our test scripts too (or at least some of them, not the problematic ones like puml). Puml tests will just be for file presence.
I've also just bumped the package-lock; the changes should be gone.
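A rough sketch of what byte-level content comparison could look like; the `.png` check is just a stand-in for however puml outputs would actually be identified:

```js
// Sketch: compare file contents byte-for-byte, skipping environment-dependent
// outputs such as puml-generated images (checked for presence only).
const fs = require('fs');
const path = require('path');

function assertSameContent(expectedFilePath, actualFilePath) {
  if (path.extname(expectedFilePath) === '.png') return; // stand-in puml filter
  const expected = fs.readFileSync(expectedFilePath);
  const actual = fs.readFileSync(actualFilePath);
  if (!expected.equals(actual)) { // Buffer#equals is a byte-wise comparison
    throw new Error(`Content mismatch: ${actualFilePath}`);
  }
}
```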
Sounds good.
I think it has been like that for quite a while and I typically just undo the changes to those font files. The package-lock update seems to have fixed it now, thanks!
Hi @ang-zeyu, I tried the above-mentioned method to account for the file number difference and realized that even if that works, the existing compare logic may not be able to accommodate this sort of file ignore. Specifically, the following will break:

```js
for (let i = 0; i < expectedPaths.length; i += 1) {
  const expectedFilePath = expectedPaths[i];
  const actualFilePath = actualPaths[i];
  if (expectedFilePath !== actualFilePath) {
    throw new Error('Different files built');
  }
}
```

My proposed solution is to move the puml files into a separate, new test_site so that we can deal with this particular case. The con is that we have an additional site, but the pro is that the existing compare logic can remain untouched. What do you think?
Hi @ang-zeyu, I just tried doing a dependency update, and it seems like once the dependencies are bumped, the font files get modified again?
I'm thinking of just modifying it like so; would that work?

```js
// need a more robust includes check though (path api)
actualPaths = actualPaths.filter(p => !pumlFiles.includes(p));
```
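One possible way to make that includes check path-aware, reusing the same `pumlFiles` and `actualPaths` from the snippet above (a sketch, not the actual script):

```js
// Sketch: normalize paths before comparing, so separator differences
// don't cause false mismatches between the two lists.
const path = require('path');

const pumlFileSet = new Set(pumlFiles.map(p => path.normalize(p)));
actualPaths = actualPaths.filter(p => !pumlFileSet.has(path.normalize(p)));
```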
Let's avoid that if possible. Our git used to be even faster; it's starting to show its "age" a little. Some of the existing test sites can probably be combined into the main one as well.
We've been committing the font files as well (e.g. 036ff81). It's expected that there will be font file changes since the dependencies have been bumped. Not sure if this is what you meant 👀
Sure, will try this approach then.
So after a dependency update (e.g. doing something like `lerna add`), we should also run `npm run updatetest` and commit the font file changes? I wasn't aware that this is necessary; if so, we should probably document it?
A better reason might be to make diffs easier to read and PRs easier to review.
Not just dependency updates. Anything that potentially changes the generated output (probably everything except documentation updates, devops stuff). Yes, we can rewrite this section a little to make the thought process clearer.
And this, which should probably be fixed regardless of the documentation / is the root of the issue.
I was only doing a version update for one package (bumping lerna to v4); I doubt that will affect the output? This is why I was quite puzzled as to why the font files need to be committed again.
I think we can transition to / tap into existing snapshot testing frameworks instead of rolling this out on our own :)
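A hypothetical illustration of leaning on Jest's snapshot testing for this, where `listGeneratedFiles` is a made-up helper that returns the full paths of files under the build output:

```js
const fs = require('fs');

test('generated site matches snapshot', () => {
  // listGeneratedFiles (hypothetical) returns sorted full file paths
  const generatedPaths = listGeneratedFiles('_site');
  // Jest records the value on the first run and fails on later mismatches
  expect(generatedPaths).toMatchSnapshot();

  // snapshot text contents, skipping puml images (presence-only above)
  generatedPaths
    .filter(p => !p.endsWith('.png'))
    .forEach(p => expect(fs.readFileSync(p, 'utf8')).toMatchSnapshot(p));
});
```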
Likely a transitive dependency somewhere down the line if this affected the font files. (unless you mean this was the one caused by #1934)
👍, certainly something we can explore.
Hmm maybe... might need more experimentation.
I mean, I am not sure why, but sometimes after running `npm run updatetest` the font files show up as modified even without a dependency update.
Fwiw, I do not encounter this as much recently (I used to). I think it's likely because we had some long periods where the font files in the main branch were supposed to be updated (but were not).
Please confirm that you have searched existing issues in the repo
Yes, I have searched the existing issues
Any related issues?
#1130
What is the area that this feature belongs to?
Documentation
Is your feature request related to a problem? Please describe.
Due to issue #1130, there might be unrelated changes to png files (generated by plantuml) when updating functional test cases. Until that issue is resolved, it may be good to mention the need to ignore .png files in our developer guide, so newcomers will know how to update functional tests properly.
Describe the solution you'd like
Put in a note somewhere here: https://markbind-master.netlify.app/devguide/workflow#updating-and-writing-tests
Describe alternatives you've considered
Manual reminder, e.g. #1652 (comment)
Additional context
No response