Testing classes have several "standardize_x()" functions, which use regex to replace character variants with their most common formats.
It made sense to cover the majority of the "symbol" types: I've had personal experience with Django templates outputting unexpected character formats, causing tests to fail even though the character I expect is technically being output to the page.
Not sure if it makes sense to have standardization for the "number" and "letter" types, but they were added for completeness.
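As a rough illustration of the shape these helpers take (the function name, patterns, and canonical targets here are hypothetical, not the project's actual code), each `re.sub()` call collapses one family of look-alike characters into a single canonical form:

```python
import re

# Hypothetical sketch of one standardize_x()-style helper. Each re.sub()
# call handles one character family, so the string is scanned once per
# family.
def standardize_symbols(text):
    """Normalize common symbol variants to a single canonical character."""
    text = re.sub(r'[\u2018\u2019\u201a]', "'", text)   # curly single quotes -> ASCII apostrophe
    text = re.sub(r'[\u201c\u201d\u201e]', '"', text)   # curly double quotes -> ASCII quote
    text = re.sub(r'[\u2013\u2014\u2012]', '-', text)   # en/em/figure dashes -> hyphen
    text = re.sub(r'\u2026', '...', text)               # ellipsis character -> three dots
    return text
```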
To accomplish this, the functions run many re.sub() replacement calls (one per character type they can replace), which may add up to a non-negligible amount of computation when analyzing large pages, and could therefore slow down test execution.
But this is entirely untested at the moment. I have no idea how efficient Python's re.sub() calls are, and the patterns are fairly simple, so the hope is that the cost is close to negligible. It needs testing at some point regardless, to check just how time-expensive these functions are to run, preferably on both very small pages and very large pages.
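A quick micro-benchmark along these lines could answer the small-page vs. large-page question (the `standardize` function below is a stand-in, not the project's actual helper):

```python
import re
import timeit

# Stand-in for a chained re.sub() standardize function, assumed shape only.
def standardize(text):
    for pattern, repl in [(r'[\u2018\u2019]', "'"),
                          (r'[\u201c\u201d]', '"'),
                          (r'[\u2013\u2014]', '-')]:
        text = re.sub(pattern, repl, text)
    return text

small_page = '<p>\u201cHello\u201d \u2013 it\u2019s a test.</p>'
large_page = small_page * 10_000  # roughly a few hundred KB of markup

# Time 100 runs against each page size to see how cost scales.
for name, page in [('small', small_page), ('large', large_page)]:
    seconds = timeit.timeit(lambda: standardize(page), number=100)
    print(f'{name}: {seconds:.4f}s for 100 runs')
```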
Note: The expectation is that these (probably) don't take much time to run, even on larger pages. But in big projects with potentially thousands of tests, even a few seconds can add up. These functions are expected to be used very frequently, so time-cost is worth considering.
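If profiling ever shows the chained re.sub() calls are a bottleneck, one possible optimization (sketched here as an assumption, not a committed design) is a single precompiled pattern with a lookup table, so each page is scanned once instead of once per character family:

```python
import re

# Hypothetical single-pass variant: one combined pattern, one scan.
REPLACEMENTS = {
    '\u2018': "'", '\u2019': "'",   # curly single quotes
    '\u201c': '"', '\u201d': '"',   # curly double quotes
    '\u2013': '-', '\u2014': '-',   # en/em dashes
}
PATTERN = re.compile('|'.join(map(re.escape, REPLACEMENTS)))

def standardize_single_pass(text):
    return PATTERN.sub(lambda m: REPLACEMENTS[m.group(0)], text)
```

For purely one-character-to-one-character mappings, `str.translate()` with a table from `str.maketrans()` would avoid regex entirely; the regex form is only needed once multi-character replacements (like ellipsis to `...`) are in the mix.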