
Reimplement livereload in a simpler and better way #2385

Merged
merged 36 commits into master from oprypin:reload on May 25, 2021
Conversation

@oprypin (Contributor) commented Apr 29, 2021

This discards the dependency on 'livereload' and 'tornado'. This implementation is based on stdlib http.server and is much smaller, while not being a downgrade in any way.

This fixes #2061: multiple file changes no longer cause the site to repeatedly rebuild.
It also makes reloads much more responsive; they're "instant" rather than being on a 1-second polling interval or such.

I also was careful to keep backwards compatibility, including providing logging messages that feel just like the old ones.

Each HTML page gets JavaScript injected into it (like before) but now, instead of connecting to WebSocket, it requests a long-polling endpoint, which blocks until a newer version ("epoch") of the site is available - then the page knows to reload itself. After a minute without events, the endpoint will just return the unchanged epoch (to avoid timeouts), then the page requests the same endpoint again. The "downtime" in between requests is not a problem because these are not events that need to be received in real time; the endpoint can instantly report that the latest version is newer than the one the page identified itself as.

The reason to replace WebSocket isn't that it's bad or something, just that stdlib doesn't have it. But long-polling works completely fine here too.
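
For illustration, here is a minimal sketch of the epoch/long-polling idea described above — names and structure are made up for this example, not the actual classes in the PR:

import threading
import time

class EpochWaiter:
    # Illustrative sketch only. A successful rebuild bumps the epoch and wakes
    # all pending long-poll requests; a request that sees no change for a
    # minute returns the unchanged epoch so the page simply polls again.
    def __init__(self):
        self._condition = threading.Condition()
        self._epoch = int(time.time())

    def rebuild_finished(self):
        with self._condition:
            self._epoch = int(time.time())
            self._condition.notify_all()

    def wait_for_newer(self, known_epoch, timeout=60):
        # Blocks the long-polling request until the site is newer than the
        # version the page identified itself as, or until the timeout expires.
        with self._condition:
            self._condition.wait_for(lambda: self._epoch > known_epoch, timeout=timeout)
            return self._epoch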


I am presenting this implementation here as part of the MkDocs repository itself. That is because this way people can try it easily, as well as review it easily, without external boilerplate. I also actually think that it would be good for MkDocs to have the flexibility provided by an inlined implementation. But if you think this should be made into a library, I'd be happy to provide that. In fact, I'll probably do that in any case. Just need a good name for it...

@waylan (Member) commented Apr 30, 2021

This is interesting. Still undecided how I feel about it. To be clear, it is better than what we have, at least in some respects.

On the one hand, I love the fact that we could drop a few dependencies (especially tornado, which is huge and way overkill for our needs). On the other hand, this adds to the maintenance burden of MkDocs. Yes, it could be broken out into a separate library, but it is so small and purpose built, it wouldn't make much of a standalone library and would likely only be used by MkDocs anyway. And that means we would probably need to maintain it ourselves. As a third party lib it would likely grow additional features which will only increase the maintenance burden. Whereas, by keeping it in-house, we can keep it simple and purpose-built. I keep going round and round like that. So moving on to other concerns for now...

Will this require any changes by third parties and/or users? Will plugins need to change in any way (especially those that make use of the on_serve event)? If so, how? What can be done to make any of those changes graceful (warning in the initial release, error in a later release, etc.)?

How well does this server handle multiple connections at once? A user could have 2 browser tabs/windows open to the same (or different) pages at the same time. Or, to cite the extreme case, some users have reported running a dev server in their local office and multiple employees are connected to the server simultaneously while making edits, etc. True, we don't officially support that use case, but I'd like to know what to expect.

I couldn't help but notice that the amount of test coverage of this patch is very low. I assume you are planning to add tests before this would get merged. Of course, if you wanted to wait for approval first, so as not to waste your time, I understand.

And now some personal observations... It had not occurred to me to use the stdlib http.server for this. I had created waylan/rheostatic some time ago with the idea that we would eventually use it as the server for MkDocs. That was before MkDocs version 1.0, when directory URLs were terribly buggy. I wanted to switch to using extension-less URLs. I never followed through because I realized that that would change everyone's URLs (dropping the slash at the end) and make all old links invalid. That said, rheostatic was implemented as a WSGI app. I always imagined it could be wrapped in a livereload WSGI app. I was surprised I couldn't find one until I more recently realized that Python-Livereload can also wrap a WSGI app. Although, I suppose the lack of a definitive Python WebSocket lib is a factor as well. In any event, if you do make a standalone library, I have to wonder if it would get more traction as a WSGI app. Then any WSGI server could just use it as a wrapper. Or am I showing my age here? I remember when WSGI was the new hotness. Now it seems that the cool kids are using asyncio frameworks (ASGI) and the like.

@oprypin (Contributor, Author) commented Apr 30, 2021

And that means we would probably need to maintain it ourselves.

Well yes, but I'm not going anywhere, so maybe no problem there.

Will plugins need to change in any way

I have prevented breakages in ways that I could foresee.
The one usage by plugins that I know is mkdocstrings calling server.watch(path, builder) - that is confirmed to still work. But the function will now reject anything that's not the original builder. See https://github.com/mkdocs/mkdocs/pull/2385/files#diff-1d9feb257a726960a5811814b912c90718f0935d46a7504c4d47268ccd67bb50R49-R50
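
For context, such an on_serve hook looks roughly like the following minimal sketch (plugin name and watched path are made up, and the builder argument assumes the change from #2071):

from mkdocs.plugins import BasePlugin

class SomePlugin(BasePlugin):
    def on_serve(self, server, config, builder, **kwargs):
        # Re-registering an extra path with the builder MkDocs passed in keeps working.
        server.watch("mkdocs.yml", builder)
        # Passing any other callable is what the new implementation rejects:
        # server.watch("mkdocs.yml", my_own_function)
        return server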

How well does this server handle multiple connections at once

Yeah, it works totally fine (I tried). The server in stdlib has been threaded for quite a while now.

I couldn't help but notice that the amount of test coverage of this patch is very low.

Yes, like you say, we'll think about testing after initial approval. For now the PR is minimal.

I wanted to switch to using extension-less URLs

Well it'd be a tough sell regardless, seeing as many production static servers wouldn't support that kind of layout.

I have to wonder if it would get more traction as a wsgi app. Then any wsgi server could just use it as a wrapper

It is indeed a good consideration, and I also noticed this. There is probably some way to hack WSGIRequestHandler into it, but I haven't tried it.

@waylan (Member) commented May 1, 2021

Thanks, that addresses my concerns.

Will plugins need to change in any way

I have prevented breakages in ways that I could foresee.
The one usage by plugins that I know is mkdocstrings calling server.watch(path, builder) - that is confirmed to still work. But the function will now reject anything that's not the original builder. See https://github.com/mkdocs/mkdocs/pull/2385/files#diff-1d9feb257a726960a5811814b912c90718f0935d46a7504c4d47268ccd67bb50R49-R50

Those two lines of code are exactly what prompted my question. My concern is that if an existing plugin is passing in something other than the original builder, when users upgrade they will suddenly get an error. Ideally, it should still work in the initial release but issue a deprecation warning. Of course, a later release would raise an error. This will give plugin devs an opportunity to make the necessary changes and release an update.

I have to wonder if it would get more traction as a wsgi app. Then any wsgi server could just use it as a wrapper

It is indeed a good consideration, and I also noticed this. There is probably some way to hack WSGIRequestHandler into it, but I haven't tried it.

This is the one thing giving me pause. I wonder if this might be a better way to go.

@oprypin (Contributor, Author) commented May 2, 2021

There is probably some way to hack WSGIRequestHandler into it, but I haven't tried it.

This is the one thing giving me pause. I wonder if this might be a better way to go.

I would not recommend it as the main course of action. The main code (static file serving) is not WSGI-aware at all, but it's the main workhorse here. You also can't hook into it from a WSGI route (e.g. the code substituting a file object must be kept as is). And the only other code here is the single long-polling endpoint. Which could probably be done through a WSGI fallback, but why pull in that machinery? Anyway, the main reason is that there's no good way to make those two coexist. It's feasible to choose the type of handler at the very beginning via hardcoded paths, but I don't think the static file handler code is flexible enough to allow for a fallback handler (as in, try to serve a file, else go for a different handler). And, again, what are we doing this for? A single endpoint? Or, which other endpoints would you have in mind for this? Or, why do we need a generic server here?
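
To illustrate what "choose the type of handler at the very beginning via hardcoded paths" would mean, here is a minimal sketch — not the actual PR code, and _serve_poll is a hypothetical helper:

import re
from http.server import SimpleHTTPRequestHandler

class Handler(SimpleHTTPRequestHandler):
    def do_GET(self):
        # One hardcoded special case for the long-polling endpoint;
        # everything else falls through to plain static file serving.
        m = re.fullmatch(r"/livereload/([0-9]+)/[0-9]+", self.path)
        if m:
            self._serve_poll(int(m[1]))  # hypothetical long-polling helper
        else:
            super().do_GET()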

@oprypin (Contributor, Author) commented May 2, 2021

My concern is that if an existing plugin is passing in something other than the original builder, when users upgrade they will suddenly get an error.

That could indeed happen, but should MkDocs really expose a generic implementation of watching arbitrary files and executing arbitrary actions? Is there any valid use case for that?

This rework actually started off as including this functionality, but I intentionally excluded it when adapting the code for this pull request. It really is a lot of code and also makes it more complicated to implement the throttling mentioned in the code. You need to consider that only one of the actions at a time can be allowed to run, and that the throttling mechanism may need to become duplicated per-handler, etc. A lot of edge cases. (and despite me saying that an earlier version already had this implemented, that implementation did not, in fact, handle edge cases well)
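
As a rough illustration of the kind of throttling being discussed (a sketch with made-up names, not the code in this PR), a burst of file-change events can be coalesced into a single rebuild like this:

import threading

class RebuildThrottler:
    def __init__(self, build, delay=0.25):
        self._build = build
        self._delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def on_any_change(self, event=None):
        # Every change restarts a short timer, so only the last event
        # in a burst actually triggers the rebuild.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._delay, self._build)
            self._timer.start()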


Anyway, I did a full search through all plugins on PyPI.

How I searched the code
# List every package on PyPI that depends on mkdocs (via the libraries.io API, page by page),
# then download each dependent's sdist from PyPI:
page=1
while curl -s 'https://libraries.io/api/Pypi/mkdocs/dependents?api_key=REDACTED&per_page=100&page='"$page" | jq -r '.[].name' | grep ''; do ((page++)); done | \
xargs -n1 -P5 bash -c 'curl -s "https://pypi.org/pypi/$0/json" | jq -r ".releases[.info.version][] | select(.packagetype==\"sdist\") | .url" | xargs -t wget -c -q'
# Extract all of the downloaded sdists and search them for uses of the on_serve event:
find . -name '*.tar.gz' -exec tar -xzf {} \;
grep -r -n on_serve */
  1. https://github.com/tkamenoko/inari/blob/adbf643829d885d939d74c7baeb10eef19482ba0/inari/mkdocs_plugin.py#L28
    breaks! looks like the implementation is watching some directory, on change writes files to docs_dir, and then presumably mkdocs itself rebuilds stuff. I think this can be changed to also use builder and still work. but probably a better implementation should be looked into anyway.
    cc @tkamenoko
  2. https://github.com/derJD/python-mkblog/blob/a76c126f9843059ede3f8908c78d826699ed32cf/src/mkblog/plugin.py#L193
    compatible
  3. https://github.com/orzih/mkdocs-extra-sass-plugin/blob/c11849098324d4e05641768d7864f8114abc2f69/mkdocs_extra_sass_plugin/plugin.py#L148
    compatible
  4. https://github.com/fralau/mkdocs_macros_plugin/blob/6bde2af230f58d6b488d1fea9a24ca182eca483c/mkdocs_macros/plugin.py#L604
    compatible
  5. https://github.com/backstage/mkdocs-monorepo-plugin/blob/77347ac1e347e4d8670c1578977e15761f89ce1a/mkdocs_monorepo_plugin/plugin.py#L75
    compatible if they switch to Pass builder to on_serve event. #2071
  6. https://github.com/athackst/mkdocs-simple-plugin/blob/13d061a3c647a1e4d6703fdc8de8bc97e19d06e0/mkdocs_simple_plugin/plugin.py#L322
    compatible
  7. https://github.com/wtc-cloud-eng/mkdocs-terraform-monorepo-plugin/blob/a6bb70638219ae358e66923640d958608196b082/mkdocs_terraform_monorepo_plugin/plugin.py#L87
    duplicate of 5
  8. https://github.com/mkdocstrings/mkdocstrings/blob/f9e1a5e8742b5d950fa6407675ee52969ac194d1/src/mkdocstrings/plugin.py#L131
    compatible
  9. https://github.com/orzih/mkdocs-with-pdf/blob/f2e78d8c8ecb505636ba31a4830646038a78accd/mkdocs_with_pdf/drivers/event_hook.py#L22
    compatible
  10. https://github.com/comwes/mkpdfs-mkdocs-plugin/blob/07c2164ab829656f1e08c7eebf4fe683011e9fa2/mkpdfs_mkdocs/mkpdfs.py#L31
    empty
  11. https://github.com/octadocs/octadocs/blob/38df6929c78fdc221f8839259697e8f9be31b4c8/octadocs/plugin.py#L202
    looks wrong in the first place, would probably be accidentally fixed by this change

"Compatible" here meaning that no code change is required. Generally it's viable to support all versions >=1.1.1 with a single code path.

@waylan (Member) commented May 3, 2021

My concern is that if an existing plugin is passing in something other than the original builder, when users upgrade they will suddenly get an error.

That could indeed happen, but should MkDocs really expose a generic implementation of watching arbitrary files and executing arbitrary actions? Is there any valid use case for that?

True, but that doesn't mean someone hasn't done it. After all, it is possible with the current implementation. Therefore, as a courtesy to our users (and to plugin devs) we MUST provide a graceful deprecation. If that makes it harder to implement and/or reduces performance for a couple of releases until we completely remove support, so be it. This is nonnegotiable.

@oprypin (Contributor, Author) commented May 3, 2021

I literally searched through all public plugins and they haven't done it

@oprypin (Contributor, Author) commented May 3, 2021

That is done now.

@waylan (Member) commented May 3, 2021

Where is the DeprecationWarning? You seemed to be suggesting that supporting anything other than the builtin builder was a bad idea, and I agree. Therefore, we need to deprecate that through a couple of releases. Initially, detect when it happens, issue a deprecation warning, and continue as before. In the next minor release, change that deprecation warning to a deprecation error (an error explaining that the behavior is no longer supported), and then in the next release after that, we could remove the special error if we feel it is not necessary.

Or are you now of the opinion that we should just leave support in indefinitely?

@oprypin (Contributor, Author) commented May 3, 2021

Overall, I don't have a strong preference for or against deprecation.

Why I didn't make it deprecated:

  • That can be added anytime through the flow of this PR
  • I didn't get a suggestion of a specific way to do it
    (now in your latest message I see it's DeprecationWarning - thanks).
  • I could not find any current example of anything being deprecated, and the examples that are there are specific to config options, which made me even more uncertain.
  • Such a deprecation is almost more annoying than a breakage -- in the end, it's the users who will be looking at that warning every time.
  • I'm not so sure anymore that there aren't valid use cases for this -- also, upon thinking about how exactly a plugin might use watchdog by itself, I found that it's actually really difficult, seeing as the plugin object gets re-created on every reload.
  • In the end I actually like the implementation.
  • It was 4 AM.

But yeah I should've mentioned that I didn't add the deprecation.

Still though, please consider what my feeling was upon the first response being "Where is the DeprecationWarning".
That on top of the overall impression of the state of this PR being the usual "I dunno, might throw away your work, might not, let's see" -- please take care to announce any changes to that state if they happen, at least to me that's really important.

@waylan (Member) commented May 6, 2021

Sorry if I wasn't clear before. Previously, the only reason a plugin could provide its own builder was a coincidence based on the fact that Python-Livereload's watch method accepts a builder and we exposed that method to plugins. Of course, we never intended for plugins to supply their own builder, but we didn't prevent it either. Therefore, if there is no reasonable deprecation path, we should just "break" it. However, if we can reasonably provide a deprecation path, we should, as a courtesy. Therefore, if we don't have the deprecation warning, then there is no point in including the change to allow plugins to provide their own builder. I hope that explains my reaction.

If you are looking for examples of how to deprecate something: in versions 1.0.0, 0.17.0, and 0.16.0 we deprecated a number of things. Those things have mostly been removed now, but you may find some examples there (perhaps start with the release notes). You can also see how the deprecation progressed through multiple releases, with the behavior changing slightly at each step along the way until any mention of it was completely removed. Maybe see #921 and #1026 for a few examples.

Sorry, we have never raised any DeprecationWarning in MkDocs (that's other projects I work on). We just log a warning with a short, concise message. In a future release, we would change from logging a warning to raising an error with a custom message. And then, in a third release, we might remove any detection of an attempt to pass a builder in and allow the underlying code to raise whatever error it would for passing in an unknown parameter. I say 'might' because we may determine that it is helpful to leave the custom error message in place, in which case the third step may never be taken.
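
A minimal sketch of that style of deprecation applied to watch() — class and parameter names are illustrative, not the exact patch:

import logging

log = logging.getLogger(__name__)

class LiveReloadServer:
    def __init__(self, builder):
        self.builder = builder
        self._watched = []

    def watch(self, path, func=None, recursive=True):
        # Step one of the deprecation: log a short warning but keep working.
        # A later release would turn this into an error instead.
        if func is not None and func is not self.builder:
            log.warning(
                "Plugins should no longer pass their own function to server.watch(); "
                "this will become an error in a future release."
            )
        self._watched.append(path)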

@waylan (Member) commented May 6, 2021

In case it wasn't clear, I'm okay with using the stdlib http.server. I'm willing to accept this assuming the following is met:

  1. A solution is worked out for the deprecation of the builder in plugins.
  2. Full test coverage is provided for the code. I assume Mock objects will need to be used.
  3. A note is added to the release notes.

@oprypin (Contributor, Author) commented May 9, 2021

That is all done; this now also fixes MIME types as in issues #1790 / #1807 / #2373

@waylan (Member) commented May 14, 2021

So I actually tried this for the first time. It seems to work fine, with one glaring issue: the server does not shut down immediately upon typing Ctrl+C. The server continues to run and continues to respond to new requests. However, any time after having typed Ctrl+C at least once, upon saving a watched file, the server immediately shuts down. It seems that responding to shutdown is tied to the file watcher, which should not be the case. I experienced this behavior from within a bash environment on Windows 10/MINGW64. I have not (yet) tested any other environments.

In other matters, I'm okay with the current logging output:

INFO     -  Building documentation...
INFO     -  Cleaning site directory
INFO     -  Documentation built in 1.26 seconds
INFO     -  [11:49:57] Serving on http://127.0.0.1:8000/
INFO     -  [11:49:57] Detected file changes
INFO     -  Building documentation...
INFO     -  [11:49:58] Reloading browsers
INFO     -  [11:50:18] Browser connected: http://127.0.0.1:8000/
INFO     -  [11:50:50] Detected file changes
INFO     -  Building documentation...
INFO     -  [11:50:51] Reloading browsers
INFO     -  [11:50:51] Browser connected: http://127.0.0.1:8000/

It is clean, easy to read, and easy to distinguish between server logs and build logs. But I don't love it. Of course, it is a huge improvement from before and is certainly something we can live with. Barring any better suggestions, it should be okay to leave it as-is.

@oprypin (Contributor, Author) commented May 14, 2021

I can reproduce your finding on Windows.

It seems that responding to shutdown is tied to the file watcher, which should not be the case.

It's not tied in any direct way. The main hypothesis is that if the main thread is blocked on one instruction, it is impossible to receive KeyboardInterrupt specifically on Windows. That waiting instruction just happens to finish after a rebuild happens.

self._rebuild_cond.wait_for(lambda: self._to_rebuild or self._shutdown)

Pushed a solution.
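
For reference, the general shape of such a workaround (a sketch, not necessarily the exact commit) is to wait in short slices so the main thread regularly returns from the blocking call and can receive KeyboardInterrupt on Windows:

def _wait_for_rebuild_or_shutdown(self):
    with self._rebuild_cond:
        # Re-check every quarter second instead of blocking indefinitely,
        # so Ctrl+C can be delivered to the main thread on Windows.
        while not self._rebuild_cond.wait_for(
            lambda: self._to_rebuild or self._shutdown, timeout=0.25
        ):
            pass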


I'm okay with the current logging output:

But I don't love it.

So why don't you say what's off about it?

@waylan (Member) commented May 14, 2021

That was weird. After updating, I couldn't get the server to shut down at all on the first run. Saves seemed to have no effect. Then, suddenly, after one of my saves, the next keyboard interrupt initiated a shutdown. I have restarted and shut down the server multiple times since and can't replicate the behavior. It now appears to be working fine.

There is an oddity I noticed though. If I have left my browser tab open after a server shutdown, then upon restarting the server the browser automatically reconnects, which is great. The strange bit is that the Browser Connected log message appears twice. I assume users would expect it to appear only once here? When opening a new browser window/tab after a fresh start or after a reload, only one log message appears. Why two in this particular case?

I'm okay with the current logging output:

But I don't love it.

So why don't you say what's off about it?

I think what bothers me is that the time is in the middle. I would have expected it to be at the front (before INFO). But that wouldn't work with the format of the build messages. Unfortunately, I don't have any suggestions for improvement.

I see you are logging http responses as debug messages (available in verbose mode), which is probably more sensible. The one thing I find odd is that you never log the response for the HTML file. Sure we get the browser connected info message with the URL, but there is no debug message indicating the request method (GET), response code (200), etc. Strangely, we do get that message for a 404, so why not for a 200?

@oprypin (Contributor, Author) commented May 14, 2021

Browser Connected log message appears twice

It is not amazing but it's expected in the current "stateless" implementation.

First the old page connects, gets logged and gets a reload response, then the new page connects and also gets logged.

But it's actually possible to skip logging in case of an instant reload response (avoid the first of those two), just makes for slightly longer code. I pushed that now.


The one thing I find odd is that you never log the response for the HTML file

Requests themselves do get logged, intentionally only in verbose mode, which also matches the old implementation.

def log_request(self, code="-", size="-"):
    level = logging.DEBUG if str(code) == "200" else logging.WARNING
    log.log(level, f'"{self.requestline}" code {code}')

For the particular case of "browser connected" - first of all, the actual URL is /livereload/whatever (and the originating URL is obtained only through REFERER) and secondly, the response is being intentionally stalled, so for that there isn't even really a response.

m = re.fullmatch(r"/livereload/([0-9]+)/[0-9]+", path)
if m:
    epoch = int(m[1])
    self._log_poll_request(environ.get("HTTP_REFERER"), request_id=path)

So do you want to just log requests to HTML files in particular? Or fake the messages about these livereload connections to look more like actual responses?

@waylan (Member) commented May 14, 2021

The one thing I find odd is that you never log the response for the HTML file

Requests themselves do get logged, intentionally only in verbose mode, which also matches the old implementation.

Hmm, I see it now. Not sure why I wasn't seeing that before. Maybe because I expected it to be the second message, not the third, as it is below.

INFO     -  [14:32:53] Reloading browsers
DEBUG    -  [14:32:53] "GET /livereload/129874218/129874359 HTTP/1.1" code 200
DEBUG    -  [14:32:53] "GET /getting-started/ HTTP/1.1" code 200

In any event. No concern here. Sorry for the noise.

But it's actually possible to skip logging in case of an instant reload response (avoid the first of those two), just makes for slightly longer code. I pushed that now.

I like it. Less confusing to the user.

@oprypin (Contributor, Author) commented May 16, 2021

So in this particular pull request I have experienced very constructive interactions and I'm grateful for it.

However, such cases are exceptional, and it's untenable to contribute in an environment where the maintainer's attitude towards a contribution is set by his preconceived beliefs, plus some random chance, and cannot be swayed by any factual arguments.

I don't intend to jeopardize this particular pull request in any way, but going forward, something will have to change.

I opened a discussion here: Concerns about maintainership of MkDocs (or a fallback, if needed).

@waylan (Member) commented May 18, 2021

I believe this is almost ready to go. Unless I'm missing something, it seems that we only need the following:

  1. The fix in gorakhargosh/watchdog#796 ("Fix" event detection right after starting an emitter on PyPy) needs to be included in the next release of watchdog (whatever comes after 2.1.1). Then we can specify that as the minimum required version.
  2. The conflict in the release-notes needs to be resolved. Of course, that is inevitable with an actively developed project, so we can address that as a last step. No point in doing so repeatedly.

One other thing is to decide whether we should include oprypin/mkdocs@b2ac728 (Offset the local site root according to the sub-path of the site_url) in this PR or separately. I don't see any reason not to include it, but @oprypin seems to suggest it should be kept separate. This is his PR, so if he wants to keep that separate, he can. I only mention it here for completeness.

The big question is whether we should delay the release of MkDocs 1.2 for this issue or not. It is not currently in the milestone, which (as of this posting) only has two remaining open issues. I didn't add it to the milestone initially because I didn't know how it would progress. Of course, as the readiness of this PR is dependent on an upstream project (see item 1 above), we have no idea of or control over when that will happen. My thinking is that we can tentatively add it to the milestone now and reexamine the situation once all remaining issues in the milestone are resolved. If need be, we can remove it from the milestone at that time and proceed with the release of 1.2 without this. Of course, it would be preferable to include this if we can.

@waylan added this to the 1.2 milestone on May 18, 2021
@oprypin (Contributor, Author) commented May 18, 2021

Thanks much for the analysis.

The fix in gorakhargosh/watchdog#796 needs to be included in the next release of watchdog (whatever comes after 2.1.1). Then we can specify that as the minimum required version.

Agreed to do that. And they have promised a release tomorrow, so that's nice.

The big question is whether we should delay the release of MkDocs 1.2 for this issue or not.

With that in mind, hopefully the wait is not significant. And we even have the option of not waiting, because (I don't prefer this approach, but just pointing it out):
the failure is specific to PyPy, Windows, and is basically test-only (because starting to detect events 0.01 sec too late is not a user-visible thing). So we can also consider not pinning that dependency so strictly and instead filtering out the test.


I don't see any reason not to include it, but @oprypin seems to suggest it should be kept separate

Right -- I would just really like to see the initial commit of this server to be in its minimal state, and add-ons to be separate; besides, as-is, this is kinda a non-breaking change, while oprypin@b2ac728 is a kinda-breaking change. But there's no good way to ensure that these are merged as 2 commits while staying within one PR, and also no way to do dependent PRs if the branches aren't in the main repository.

I was still seeing random failures on Windows, in addition to the consistent ones on PyPy, so may as well add those sleeps everywhere.
@oprypin (Contributor, Author) commented May 19, 2021

I was looking into the fact that tests still occasionally fail randomly -- not on PyPy, despite the attention previously being brought to PyPy.
(Meanwhile, the new watchdog release is now there.)

I made GitHub Actions run this new test suite 50 times on each platform.
On Windows it failed 13 out of 200 times (total over all versions) and also a bit on Mac:
https://github.com/oprypin/mkdocs/actions/runs/857713565

So I added sleeps directly to the tests, which changed the failure rate to 0:
That is 0 out of 18 (test cases) × 50 (runs) × 15 (platform combinations).
https://github.com/oprypin/mkdocs/actions/runs/857723657

So, seeing as we need those sleeps regardless, and that's what the fix for PyPy is anyway, now we don't even need to pin the watchdog version.

Mind you, the sleeps sound like a pretty bad practice to add, but each of them is 10ms, those are not real waits (I guess it's basically something to ensure the execution yields to the other thread). And these tests are already slow due to watchdog's startup and shutdown taking time, so that's not a big deal.
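
The pattern in the tests is roughly this (an illustrative helper, not the actual test code):

import pathlib
import time

def touch_and_settle(path):
    # Modify a watched file, then yield for ~10 ms so the watchdog observer
    # thread has a chance to deliver the event before the test asserts anything.
    pathlib.Path(path).touch()
    time.sleep(0.01)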


In summary, I consider this fully ready now.

@ktomk (Contributor) commented May 23, 2021

In summary, I consider this fully ready now.

I'll give it a test. Before that: would you be so kind as to remove the merge commits from the PR, please? (The feature in this form already incorporates other changes via merge.) And then rebase cleanly on top of the target branch (master)? That would be great, @oprypin.

@oprypin (Contributor, Author) commented May 23, 2021

The feature in this form already incorporates other changes via merge

That is not true. Comparing this branch to master shows no other changes.
master...oprypin:reload

Would you be that kind and remove the merge commits from the PR please?

Why? Removing or avoiding them was never the plan. And I don't think the fact that it has some slightly chaotic commits affects the final installed result. It certainly doesn't affect a direct pip install. But if you'd be more willing to apply only one patch on top of master, this is the patch: https://github.com/mkdocs/mkdocs/compare/master...oprypin:reload.diff
Force pushing to a pull request branch is not a good practice because reviewers no longer have the ability to see only what changed in the last added commits.

And then rebase clean on top of the target branch

I don't want to throw away the history of how the PR progressed. It already merges the latest of the target branch. In the end GitHub will just "rebase" the very final result itself.

@ktomk (Contributor) commented May 23, 2021

@oprypin Thank you for your hard work putting this together and your recent comment, much appreciated.

I'm deeply sorry my asking caused that much comment and consideration; it was not meant that way. I kindly ask you to accept my apology.

It is not, and never has been, a blocker for my test (which is going well so far), it was just an early comment, which makes me especially sorry that my asking was the cause of so much further ado. It was merely a QC thing, VCS-related, so just a technicality. I know this does not make it better, and I have to ask you to accept my apology nevertheless.

I don't want to throw away the history of how the PR progressed.

I'm glad you didn't, as I would not have either. As written, I feel sorry I asked, as it caused that many considerations; I will refrain from further comments at least until this is resolved. My intent was to keep unrelated changes out of it to ease my local review, and even with the best intent, that was absolutely selfish. My belief was that my really short comment would be OK to handle with sovereignty, and I have to find out for myself first what made me think so, as I was obviously wrong and offended you.

I will step away from adding any further comments to this PR for now, as I don't want to risk derailing it in any way. Be it entirely subjective feedback of mine, test results, or even how I was able to test it, I fear the risk of negativity from any kind of interaction does not justify the benefits it may have under the circumstances I believed or expected, and I owe it to you to verify this first.

I'm deeply sorry and the only thing left for me is the hope you can accept my apology and we can build trust in each other.

@oprypin (Contributor, Author) commented May 23, 2021

@ktomk No problem at all, and thanks for your heartfelt message. I really just wanted to explain in detail what I often see as a misconception.
And it's probably true that local review is made more difficult by the numerous commits. There are always ways around it, of course, but I can't assume it's obvious. In this case I think git fetch origin && git diff origin/master...reload would show one diff covering everything relevant.

@ktomk (Contributor) commented May 24, 2021

@oprypin Thanks, this takes a burden off my shoulders.

So if it would be OK for you to continue regardless of any misconceptions, it would be fine for me if we could leave it so.

And if you're interested in why I do a rebase check, I'm open to talking with you about that as well; it's just that I think it does not belong here.

Sounds fair to you?

Successfully merging this pull request may close these issues: livereload reloads for every file that changes