
Include caveats for updating functional testcases in DG #1937

Closed
tlylt opened this issue May 23, 2022 · 23 comments · Fixed by #1993

Comments

@tlylt
Contributor

tlylt commented May 23, 2022

Please confirm that you have searched existing issues in the repo

Yes, I have searched the existing issues

Any related issues?

#1130

What is the area that this feature belongs to?

Documentation

Is your feature request related to a problem? Please describe.

Due to issue #1130, there might be unrelated changes to png files (generated by plantuml) when updating functional test cases. Before that issue is resolved, it would be good to mention in our developer guide the need to ignore these .png files, so that newcomers know how to update functional tests properly.

Describe the solution you'd like

Put in a note somewhere here: https://markbind-master.netlify.app/devguide/workflow#updating-and-writing-tests

Describe alternatives you've considered

Manual reminder, e.g. #1652 (comment)

Additional context

No response

@ang-zeyu
Contributor

Perhaps to unblock that issue we can resort to the following:

  • add the puml outputs to gitignore / untrack them
  • manually check, in our test scripts, for these files (should be easy to maintain the list since there are just 9/10 files)

@tlylt
Contributor Author

tlylt commented May 25, 2022

  • add the puml outputs to gitignore / untrack them

Just checking: won't this make updating the puml outputs difficult, i.e. we lose the ability to ensure that the puml outputs are as expected? (When the output is a legitimate change)

manually check, in our test scripts for these files (should be easy to maintain the list since just 9/10 files)

May I ask what you mean by "manually check" (yet also in "test scripts")?

@ang-zeyu
Contributor

Something like this; the check can be moved into our test scripts:

const fs = require('fs');
const assert = require('assert');

const pumlFiles = [ /* ... */ ]; // maintain this list manually
pumlFiles.forEach((file) => {
  assert(fs.existsSync(file), `missing puml output: ${file}`);
});

@tlylt
Contributor Author

tlylt commented May 25, 2022

Something like this; the check can be moved into our test scripts:

const fs = require('fs');
const assert = require('assert');

const pumlFiles = [ /* ... */ ]; // maintain this list manually
pumlFiles.forEach((file) => {
  assert(fs.existsSync(file), `missing puml output: ${file}`);
});

I assume you mean:

  1. Delete the puml files
  2. Add them to gitignore
  3. Add assertions to check for existence in test scripts

I think this works fine except for the case whereby someone who has never seen the generated puml files (perhaps he just cloned the repo... this actually also applies to any dev, but is more serious for newcomers) works on some feature which affects the puml files. He then runs the tests and sees that they pass, yet

  • the puml files are generated but could be erroneous, but he is not aware and won't be checking the content to find out
  • the puml files look "different" (which may be undesirable), but he is not aware because he does not have a reference to the previous version even if he manually inspects the puml files

@ang-zeyu
Contributor

I assume you mean:

yup

the puml files are generated but could be erroneous, but he is not aware and won't be checking the content to find out
the puml files look "different" (which may be undesirable), but he is not aware because he does not have a reference to the previous version even if he manually inspects the puml files

Hmm, I don't see how this would be any different; we are only checking for the existence of files currently too. (via the expected - generated folder difference)

the puml files look "different" (which may be undesirable), but he is not aware because he does not have a reference to the previous version even if he manually inspects the puml files

I assume this is referring to individual environment settings (e.g. resolution, OS). This is unavoidable, even in a user setting. We assume the user (author) has their environment set up properly.

As for a developer, they are bound to notice at some point if their puml environment is functioning incorrectly. (even then not a huge issue as CI environment builds the images and runs the tests, and a different image is still an image (file existence) =P)

Slightly unrelated (could be a "different" (error) image): There's also the issue of error handling discussed previously #1245 (comment), #1903, which should be implemented in an automated way (developer shouldn't have to manually check output files).

@tlylt
Contributor Author

tlylt commented May 25, 2022

the puml files are generated but could be erroneous, but he is not aware and won't be checking the content to find out
the puml files look "different" (which may be undesirable), but he is not aware because he does not have a reference to the previous version even if he manually inspects the puml files

Hmm, I don't see how this would be any different; we are only checking for the existence of files currently too. (via the expected - generated folder difference)

That's right for running the tests. I'm referring to when we update the tests, where I believe we should verify that the content of the generated files is indeed as intended. Under the suggested approach this does not hold.

@ang-zeyu
Contributor

I believe we should verify that the content of the generated files is indeed as intended. Under the suggested approach this does not hold.

I believe there is no need; as mentioned, there are only 2 scenarios where the content is "different":

  • a puml error (syntax, etc.)
  • user / developer environment has issues

The first case we should fix via proper error handling / propagation (s.t. it is also detectable via tests, if we want to go further to test the puml executable behaviour).

I don't think we've ever mandated checking how the puml looks either: https://markbind.org/devdocs/devGuide/workflow.html. Keep in mind it's also ultimately the CI environment doing the "final litmus check" (where it would be impossible to inspect visually), not developers.
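A minimal sketch of the error handling / propagation idea above, assuming the plantuml jar is invoked via Node's child_process (all names and flags here are illustrative, not MarkBind's actual internals):

  const { spawn } = require('child_process');

  // Hypothetical helper: renders a .puml file and surfaces failures
  // instead of silently producing an error image.
  function renderPuml(jarPath, pumlSource, outputDir) {
    return new Promise((resolve, reject) => {
      const proc = spawn('java', ['-jar', jarPath, '-o', outputDir, pumlSource]);
      let stderr = '';
      proc.stderr.on('data', (chunk) => { stderr += chunk; });
      proc.on('close', (exitCode) => {
        if (exitCode !== 0) {
          // propagate the failure so tests (and CI) can detect it
          reject(new Error(`plantuml exited with code ${exitCode}: ${stderr}`));
        } else {
          resolve();
        }
      });
    });
  }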

@ang-zeyu
Contributor

If doing some puml-specific updating of the tests (e.g. adding a new <puml>), checking that is also of course still a should. (and still possible with this change)

@tlylt
Contributor Author

tlylt commented May 25, 2022

I don't think we've ever mandated checking how the puml looks either: markbind.org/devdocs/devGuide/workflow.html. Keep in mind it's also ultimately the CI environment doing the "final litmus check" (where it would be impossible to inspect visually), not developers.

https://markbind.org/devdocs/devGuide/workflow.html#updating-and-writing-tests

You should always check that the generated output is correct before committing any changes to the test sites.

It's certainly not a "mandate", but given that the updatetest function overwrites the generated files in such a way that it's dangerous to be careless, erring on the side of caution is likely to be beneficial?

@ang-zeyu
Contributor

I see, thanks for pointing that page out. Yes, "mandate" is the wrong word -- more so "advise".

it's dangerous to be careless, erring on the side of caution is likely to be beneficial?

But I still don't see how this aspect would be any different, that warning would preclude the puml files.

Note again we aim to test for file existence, not how the puml looks. (assume the puml team is doing its thing correctly)
Doing that in a sure manner requires some sort of CV technique which really seems out of scope. Manual inspection is still possible ⬇️

If doing some puml-specific updating of the tests (e.g. adding a new <puml>), checking that is also of course still a should. (and still possible with this change)

@ang-zeyu
Contributor

Think of it this way: we are just moving the tracking from the VCS (git) to the pumlFiles array here, no change otherwise.

const fs = require('fs');
const assert = require('assert');

const pumlFiles = [ /* ... */ ]; // maintain this list manually
pumlFiles.forEach((file) => {
  assert(fs.existsSync(file), `missing puml output: ${file}`);
});

@tlylt
Contributor Author

tlylt commented Jun 17, 2022

Think of it this way: we are just moving the tracking from the VCS (git) to the pumlFiles array here, no change otherwise.

Doing this will mean that we have to update our test script's logic to deal with the difference in the number of files? CMIIW, currently we ensure that the number of generated files matches the expected site's number.

There are quite a number of font files that are also modified during an updatetest process; do you happen to experience that as well? Do you think we should do the same for those files?

@ang-zeyu
Contributor

ang-zeyu commented Jun 18, 2022

Doing this will mean that we have to update our test script's logic to deal with the difference in the number of files? CMIIW, currently we ensure that the number of generated files matches the expected site's number.

yes, I think so. We can do a simple addition:

  const walkSync = require('walk-sync'); // assuming the walk-sync npm package

  const expectedPaths = walkSync(expectedDirectory, { directories: false });
  const actualPaths = walkSync(actualDirectory, { directories: false });

  if ((expectedPaths.length + pumlFiles.length) !== actualPaths.length) {
    throw new Error('Unequal number of files');
  }

There are quite a number of font files that are also modified during an updatetest process; do you happen to experience that as well? Do you think we should do the same for those files?

Ahh... I thought this was just me as well. This didn't use to occur, right? Based on the package-lock.json changes it seems like this was updated in #1934 accidentally, without the corresponding test file updates.

Yes, perhaps we should really start exploring comparing binary file contents (or at least, some of them, not the problematic ones like puml) in our test scripts too. Puml tests will just be for file presence.
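A minimal sketch of what such a binary content comparison could look like, assuming Node's fs and Buffer APIs (the helper name is illustrative):

  const fs = require('fs');

  // Compares two files byte for byte; Buffer#equals does the content check.
  function binaryFilesEqual(expectedPath, actualPath) {
    const expected = fs.readFileSync(expectedPath);
    const actual = fs.readFileSync(actualPath);
    return expected.equals(actual);
  }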

I've also just bumped the package-lock, the changes should be gone

@tlylt
Contributor Author

tlylt commented Jun 18, 2022

  const walkSync = require('walk-sync'); // assuming the walk-sync npm package

  const expectedPaths = walkSync(expectedDirectory, { directories: false });
  const actualPaths = walkSync(actualDirectory, { directories: false });

  if ((expectedPaths.length + pumlFiles.length) !== actualPaths.length) {
    throw new Error('Unequal number of files');
  }

Sounds good.

Ahh... I thought this was just me as well. This didn't use to occur, right? Based on the package-lock.json changes it seems like this was updated in #1934 accidentally, without the corresponding test file updates.
I've also just bumped the package-lock, the changes should be gone

I think it has been like that for quite a while and I typically just undo the changes to those font files. The package-lock update seems to have fixed it now, thanks!

@tlylt
Contributor Author

tlylt commented Aug 1, 2022

Hi @ang-zeyu, I tried the above-mentioned method to account for the file number difference and realized that even if that works, the existing compare logic may not be able to accommodate this sort of file ignore. Specifically, the following will break:

  for (let i = 0; i < expectedPaths.length; i += 1) {
    const expectedFilePath = expectedPaths[i];
    const actualFilePath = actualPaths[i];
    if (expectedFilePath !== actualFilePath) {
      throw new Error('Different files built');
    }
  }

My proposed solution is to move the puml files into a separate, new test_site so that we can deal with this particular case. The con is that we have an additional site, but the pro is that the existing compare logic can remain untouched. What do you think?

(Edit)
I think it could be possible to make the existing compare logic work, by changing the for loop to iterate with two pointers over the two file path lists; see the sketch below.
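A minimal sketch of that two-pointer idea, assuming expectedPaths excludes the puml outputs while actualPaths contains them, and both lists are sorted the same way (the helper name is hypothetical):

  function comparePaths(expectedPaths, actualPaths, pumlFiles) {
    let i = 0; // pointer into expectedPaths
    let j = 0; // pointer into actualPaths
    while (j < actualPaths.length) {
      if (pumlFiles.includes(actualPaths[j])) {
        j += 1; // generated-only puml output, skip it
        continue;
      }
      if (i >= expectedPaths.length || expectedPaths[i] !== actualPaths[j]) {
        throw new Error('Different files built');
      }
      i += 1;
      j += 1;
    }
    if (i !== expectedPaths.length) {
      throw new Error('Different files built'); // expected files left unmatched
    }
  }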

@tlylt
Contributor Author

tlylt commented Aug 3, 2022

There are quite a number of font files that are also modified during an updatetest process; do you happen to experience that as well? Do you think we should do the same for those files?

Ahh... I thought this was just me as well. This didn't use to occur, right? Based on the package-lock.json changes it seems like this was updated in #1934 accidentally, without the corresponding test file updates.

Yes, perhaps we should really start exploring comparing binary file contents (or at least, some of them, not the problematic ones like puml) in our test scripts too. Puml tests will just be for file presence.

I've also just bumped the package-lock, the changes should be gone

Hi @ang-zeyu, I just tried doing a dependency update, and it seems like once package-lock is updated, the font file changes will be picked up again after npm run updatetest. Actually, when you bumped the package-lock, what did you do to ensure that the font files are no longer changed?

@ang-zeyu
Contributor

ang-zeyu commented Aug 6, 2022

  for (let i = 0; i < expectedPaths.length; i += 1) {
    const expectedFilePath = expectedPaths[i];
    const actualFilePath = actualPaths[i];
    if (expectedFilePath !== actualFilePath) {
      throw new Error('Different files built');
    }
  }

I'm thinking we just modify it like so; would that work?

// need a more robust includes check though (path api)
actualPaths = actualPaths.filter((p) => !pumlFiles.includes(p));
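A minimal sketch of a more robust, path-aware version of that check, using Node's path module (the helper name is illustrative):

  const path = require('path');

  // Normalize both sides so separator / relative-segment differences
  // don't cause false mismatches.
  function isIgnoredPumlFile(filePath, pumlFiles) {
    const normalized = path.normalize(filePath);
    return pumlFiles.some((pumlFile) => path.normalize(pumlFile) === normalized);
  }

  actualPaths = actualPaths.filter((p) => !isIgnoredPumlFile(p, pumlFiles));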

new test_site

Let's avoid that if possible. Our git used to be even faster; it's starting to show its "age" a little. Some of the existing test sites can probably be combined into the main one as well.

I've also just bumped the package-lock, the changes should be gone

Hi @ang-zeyu, I just tried doing a dependency update, and it seems like once package-lock is updated, the font file changes will be picked up again after npm run updatetest. Actually, when you bumped the package-lock, what did you do to ensure that the font files are no longer changed?

We've been committing the font files as well (e.g. 036ff81). It's expected that there will be font file changes since the dependencies have been bumped. Not sure if this is what you meant 👀

@tlylt
Contributor Author

tlylt commented Aug 6, 2022

I'm thinking we just modify it like so; would that work?

// need a more robust includes check though (path api)
actualPaths = actualPaths.filter((p) => !pumlFiles.includes(p));

Sure, will try this approach then.

We've been committing the font files as well (e.g. 036ff81). It's expected that there will be font file changes since the dependencies have been bumped. Not sure if this is what you meant 👀

So after a dependency update (e.g. doing something like lerna add), we should also run npm run updatetest and commit the font file changes? I wasn't aware that this is necessary; if so, we should probably document it?

@ang-zeyu
Contributor

ang-zeyu commented Aug 6, 2022

new test_site

A better reason might be to make diffs easier to read and PRs easier to review.

So after a dependency update (e.g. doing something like lerna add), we should also run npm run updatetest and commit the font file changes? I wasn't aware that this is necessary; if so, we should probably document it?

Not just dependency updates. Anything that potentially changes the generated output (probably everything except documentation updates, devops stuff).

Yes, we can rewrite this section a little to make the thought process clearer.

Yes, perhaps we should really start exploring comparing binary file contents (or at least, some of them, not the problematic ones like puml) in our test scripts too. Puml tests will just be for file presence.

And this, which should probably be fixed regardless of the documentation / is the root of the issue.

@tlylt
Contributor Author

tlylt commented Aug 6, 2022

Not just dependency updates. Anything that potentially changes the generated output (probably everything except documentation updates, devops stuff).

I was only doing a version update for one package (bumping lerna to v4); I doubt that will affect the output? This is why I was quite puzzled as to why the font files need to be committed again.

Yes, perhaps we should really start exploring comparing binary file contents (or at least, some of them, not the problematic ones like puml) in our test scripts too. Puml tests will just be for file presence.

And this, which should probably be fixed regardless of the documentation / is the root of the issue.

I think we can tap into existing snapshot testing frameworks instead of rolling this out on our own :)
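For instance, a minimal sketch using Jest's snapshot testing (assuming Jest is available; the file path is illustrative):

  const fs = require('fs');

  // The first run records the output as a snapshot; later runs diff against it.
  test('generated page matches stored snapshot', () => {
    const html = fs.readFileSync('expected_site/index.html', 'utf8');
    expect(html).toMatchSnapshot();
  });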

@ang-zeyu
Contributor

ang-zeyu commented Aug 7, 2022

I was only doing a version update for one package (bumping lerna to v4); I doubt that will affect the output?

Likely a transitive dependency somewhere down the line if this affected the font files. (unless you mean this was the one caused by #1934)

I think we can tap into existing snapshot testing frameworks instead of rolling this out on our own :)

👍, certainly something we can explore.

@tlylt
Contributor Author

tlylt commented Aug 7, 2022

Likely a transitive dependency somewhere down the line if this affected the font files.

Hmm maybe... might need more experimentation.

(unless you mean this was the one caused by #1934)

I mean I am not sure why, but sometimes after running npm run updatetest the font files just start showing up as uncommitted changes. So for example, in the case mentioned above, I updated the lerna package in one branch and probably tried some other commands. Then when I am back on the master branch or a new branch, if I run npm run updatetest, I get the font file changes again.

@ang-zeyu
Contributor

Some of the fonts need to be updated together with the dependency (bumping font-awesome etc.). Not sure about the rest, but the fonts on my end don't really change during updatetest (it happened only a few times as far as I can remember). If this is a rare issue, we can use the manual reminder method, as it seems more complicated to maintain a long list of ignored fonts.

Fwiw, I do not encounter this as much recently (I used to). Likely because we had some long periods where the font files in the main branch were supposed to be updated (but were not).
