Add tests to Enhanced Cover Art Uploads #72
Conversation
Some additional context w.r.t. the Tidal changes: The old implementation could not be tested properly, as it always landed on the captcha page, likely due to the absence of browser headers. I experimented a bit with curl to check which headers needed to be set, managed to get a working page, retried a couple of times to double-check, and got IP banned on tidal.com after 3 of those requests. That's likely because I was requesting only the main page, and not properly doing what a browser would do. There doesn't seem to be a straightforward way to get an IP ban lifted; it's either waiting or contacting support (who will probably tell you to wait as well).
In any case, using that strategy in the userscript is a very bad idea if Tidal IP bans so quickly, so now we're using the API with one of their own keys, which has been present in their JS for at least 3 years.
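To illustrate the API-based approach described above, here is a hedged sketch of building such a request. The endpoint path, query parameter, and header name are assumptions based on Tidal's public web player behaviour, not necessarily the script's actual code; the token value is a placeholder.

```typescript
// Hypothetical sketch: building a Tidal API request instead of scraping the
// album page. Endpoint, parameter, and header names are assumptions for
// illustration only.
interface TidalRequest {
    url: string;
    headers: Record<string, string>;
}

function buildTidalAlbumRequest(albumId: string, webToken: string, countryCode = 'US'): TidalRequest {
    // A token extracted from Tidal's own web player JS is sent along, so the
    // request looks like one the official client would make.
    const url = `https://api.tidal.com/v1/albums/${albumId}?countryCode=${countryCode}`;
    return {
        url,
        headers: { 'x-tidal-token': webToken },
    };
}
```

Keeping the request construction in a pure function like this also makes it trivial to unit-test without any network access.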
Force-pushed from 4fc719c to 470e2b6.
I've been doing some thinking on how to best approach the core (non-provider) code refactoring to improve testability, so I'll summarise it here so I don't forget.

The main challenge with testing the core code in its current state is that it's fairly heavily coupled to the UI. Since we're using JSX with nativejsx, that's near impossible to test. nativejsx does something completely different from the usual JSX implementations: in conventional JSX, the JSX code is converted into a sort of object describing the DOM elements, which is later fed into a renderer. There are two options here:
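The difference between the two JSX strategies mentioned above can be sketched as follows. The descriptor shape and the `h` helper name are illustrative (borrowed from the conventional hyperscript convention), not ECAU code:

```typescript
// Conventional JSX: <div id="note">hi</div> transpiles to a call like
// h('div', { id: 'note' }, 'hi'), which merely *describes* the element.
interface VNode {
    type: string;
    props: Record<string, string>;
    children: Array<VNode | string>;
}

function h(type: string, props: Record<string, string>, ...children: Array<VNode | string>): VNode {
    return { type, props, children };
}

// The result is a plain object that a renderer turns into real DOM later,
// so tests can inspect it without any DOM implementation at all.
const described = h('div', { id: 'note' }, 'hi');

// nativejsx, by contrast, compiles the same JSX into direct DOM API calls,
// roughly:
//   const el = document.createElement('div');
//   el.setAttribute('id', 'note');
//   el.appendChild(document.createTextNode('hi'));
// The output is a live DOM node immediately, which is why testing it
// requires a DOM (e.g. jsdom) rather than simple object assertions.
```

This is the crux of the testability problem: with nativejsx there is no intermediate, inspectable representation between the JSX source and the live DOM.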
The problem with option 1 is that in order to test...

I then foresee the following changes to make this work:
W.r.t. that logging setup:
Btw, the reason CI isn't running on these changes yet is because of the merge conflict. I'll resolve those conflicts once #68 is merged. Rest assured that all tests are passing (at least on my machine).
Since it's rather boring to see all tests pass, I'm just making a few of them fail deliberately while I'm reviewing locally 😁. I think I've already reviewed most parts of this PR (as soon as you pushed new commits) while still busy with #68, but it's hard to see the progress before the other one is merged (GitHub does not track the "reviewed" status of files across PRs that contain the same commits). It won't take long now until I'm done with the other one and you can rebase this PR.
* Use Tidal API instead of requesting the page, since Tidal is apparently very aggressive in IP banning on automated requests.
* Properly handle cases where release does not exist
* Only suspect a bad Qobuz app ID on a 400 response, to prevent false positives on e.g. albums that don't exist.
* Improve interop between comments and types. E.g. "Digipack Outer Left" now maps to a cover with type "Other" and comment "Digipack Outer Left", instead of merely "Digipack" as the comment.
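The comment/type interop from the last bullet could be sketched like this. The mapping function and type names are hypothetical illustrations (the type values mirror MusicBrainz cover art types), not the script's real code:

```typescript
// Hypothetical sketch of mapping a provider's free-text comment to a
// MusicBrainz cover art type plus comment.
type CoverType = 'Front' | 'Back' | 'Other';

interface MappedCover {
    types: CoverType[];
    comment: string;
}

function mapProviderComment(providerComment: string): MappedCover {
    const lower = providerComment.toLowerCase();
    if (lower.includes('front')) return { types: ['Front'], comment: '' };
    if (lower.includes('back')) return { types: ['Back'], comment: '' };
    // Anything unrecognised, like "Digipack Outer Left", becomes type
    // "Other" with the full provider comment retained, instead of being
    // collapsed to just "Digipack".
    return { types: ['Other'], comment: providerComment };
}
```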
The only reason we were using a custom error class was to set a custom error message, which we can more easily do by just providing the message in the base `Error` constructor. See also #69 (comment)
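The simplification described above might look roughly like this; the class and function names are illustrative placeholders:

```typescript
// Before: a custom error subclass used solely to carry a fixed message.
class ProviderFetchError extends Error {
    constructor() {
        super();
        this.message = 'Could not fetch the release page';
    }
}

// After: the base Error constructor already accepts a message, so the
// subclass is unnecessary.
function failedFetch(): Error {
    return new Error('Could not fetch the release page');
}
```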
These files don't contain any functionality, they merely specify userscript metadata for use in the build process.
Adding a jQuery dependency which can be injected into the tests so that we don't have to mock out jQuery altogether.
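The injection idea above can be sketched as follows, with a hypothetical consumer function: code that needs jQuery receives it as a parameter instead of reading a global, so tests can hand in a lightweight stub rather than mocking the whole library.

```typescript
// Minimal stand-in for the subset of jQuery the consumer needs.
// The shape here is illustrative, not ECAU's actual abstraction.
type JQueryLike = (selector: string) => { length: number };

function countMatches($: JQueryLike, selector: string): number {
    return $(selector).length;
}

// In production the real jQuery is injected; in tests, a stub suffices:
const stub: JQueryLike = (selector) => ({ length: selector === '.cover' ? 3 : 0 });
```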
Although unlikely, it is possible that someone may want to implement their own cover art seeder for use with ECAU. These docs should help them get started. It's also a good reference for internal usage.
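As a rough idea of what those docs cover, a third-party seeder might plug in like this. The interface shape and registration function are assumptions for illustration; the real API documented for ECAU may differ:

```typescript
// Hypothetical sketch of a cover art seeder integration.
interface Seeder {
    // Domains on which this seeder should activate.
    supportedDomains: string[];
    // Inject seeding links/buttons into the current page.
    insertSeedLinks(): void;
}

const registeredSeeders: Seeder[] = [];

function registerSeeder(seeder: Seeder): void {
    registeredSeeders.push(seeder);
}

registerSeeder({
    supportedDomains: ['example.com'],
    insertSeedLinks: () => { /* would add a "seed to MB" link here */ },
});
```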
The full URL leads to vastly improved traceability over simply "Seeded from atisket".
I've pushed a couple of misc things that aren't directly related to testing, but would be nice to have in a next release. We should fix #79 before releasing, though...
This test will always fail if we run the tests in passthrough mode. It also doesn't really serve a purpose anymore, as many of our tests will fail if the adapter stops working anyway.
The release we were using previously now returns a 404, so although the code is still correct, the test was failing.
For some reason they're not URL-encoding the images anymore. We'll handle both URL-encoded and non-encoded images in the tests for future-proofing.
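One way to accept both forms in the tests is to normalise URLs before comparing, e.g. with a helper along these lines (a sketch, not necessarily the test suite's actual approach):

```typescript
// Decode percent-escapes so encoded and unencoded image URLs compare
// equal. Safe here because the fixture URLs are at most singly encoded;
// doubly-encoded URLs would need different handling.
function normalizeImageUrl(url: string): string {
    try {
        return decodeURIComponent(url);
    } catch {
        // Malformed escape sequences: fall back to the raw URL.
        return url;
    }
}
```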
LGTM. I continuously reviewed your code changes as you pushed them; otherwise this huge amount of tests would have given me jestphobia for sure.
I also ran the tests locally; checking out the HAR files made me hit the Windows maximum path length limitation for the first time, which I finally had to disable.
There are only two minor suggestions, otherwise there is not much to add from my side.
Thank you also for experimenting with a generic approach to logging with multiple sinks, this is something many userscripts could benefit from. I think this is one of the things that could also be included in a potential shared (MusicBrainz) userscript tools package.
We were lacking some tags in the template for an `it.each` test, so one test case was never being executed. Also cleaning up the HTTP recordings for this test. Co-authored-by: David Kellner <52860029+kellnerd@users.noreply.github.com>
I gotta admit I was getting burnt out after a while too. This PR became much larger than what I initially expected.
Definitely. I'm sure there are like 500 different logging libraries for JS, but I didn't want to fatten up the built code even more by embedding 20 KB of logging libraries 😅 The logger is fairly simplistic, but it does the trick, and for its current purposes it offers just the right amount of flexibility. I've gotten used to loguru's dead simple, boilerplate-free logging setup, so I tried to somewhat replicate that (but without a default sink, and setting a log level is a bit easier in our logger).
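The multi-sink, no-default-sink idea described above could be sketched like this. This is a minimal illustration in the loguru spirit, not the PR's actual implementation; all names are placeholders:

```typescript
// A tiny multi-sink logger: sinks are plain callbacks, there is no default
// sink, and the level filter is a single comparison.
enum LogLevel { Debug = 0, Info = 1, Warn = 2, Error = 3 }

type Sink = (level: LogLevel, message: string) => void;

class Logger {
    private sinks: Sink[] = [];

    constructor(private level: LogLevel = LogLevel.Info) {}

    addSink(sink: Sink): void {
        this.sinks.push(sink);
    }

    log(level: LogLevel, message: string): void {
        if (level < this.level) return;  // cheap level filtering
        for (const sink of this.sinks) sink(level, message);
    }

    debug(message: string): void { this.log(LogLevel.Debug, message); }
    info(message: string): void { this.log(LogLevel.Info, message); }
}
```

Because sinks are just functions, a console sink, a GUI sink, and an in-memory test sink can all coexist on one logger instance, which is what makes the approach attractive for a shared userscript tools package.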
Couple of things to note:
I suppose we could add a more integration-test-like workflow to CI that runs Polly.JS in passthrough mode, so we're actually performing the network requests rather than always replaying, and can find out whether there are any upstream changes that need to be accounted for. Such tests should definitely not run on every commit in a PR, because, among other things, they may take quite a long time. Perhaps we could instead only run them on pushes to main (i.e. after a PR merge) before deployment, and/or on a cron schedule (perhaps once a week or so).

Drafting to check whether this works properly in CI.
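A workflow along those lines might look roughly like this. This is a hypothetical config sketch: the job name, npm scripts, and the `POLLY_MODE` environment switch are assumptions (Polly.JS configures its mode in code, so the project's test setup would need to read such a variable):

```yaml
# Hypothetical GitHub Actions sketch: live-network tests on main pushes
# and a weekly cron, never on PR commits.
name: Live network tests
on:
  push:
    branches: [main]
  schedule:
    - cron: '0 4 * * 1'  # weekly, Monday 04:00 UTC
jobs:
  passthrough-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm test
        env:
          POLLY_MODE: passthrough  # assumed switch read by the test setup
```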