Send RPC request to switch assets directory on hot reload. (#12872)
* Send RPC request to switch assets directory on hot reload. This is needed to pick up updated assets that are expected to be picked up on hot reload.
* Assert assets directory is not null.
* Better multiple future wait
* Add type annotation
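The commit summary describes sending an RPC request to the VM service on hot reload. The actual Dart implementation isn't shown here, but the general shape of such a JSON-RPC message can be sketched as follows; the method and parameter names below are hypothetical placeholders, not the real service API:

```python
import json

def make_rpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 message of the kind a tool could send
    to a VM service over its websocket connection."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Hypothetical method and parameter names, for illustration only.
msg = make_rpc_request("setAssetDirectory",
                       {"assetDirectory": "build/flutter_assets"})
decoded = json.loads(msg)
```

Each such request costs one extra round trip to the device, which is relevant to the benchmark discussion below.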
Showing 3 changed files with 20 additions and 0 deletions.
8da5af5
According to flutter benchmarks this PR seems to have regressed hot_mode_dev_cycle__benchmark hotReloadVMReloadMilliseconds significantly (from ~25 ms to ~472 ms).
I've tried reproducing this regression locally (Linux with an Android emulator), but could not. On this PR I get similar hotReloadVMReloadMilliseconds numbers:
@yjbanov , any ideas on how the numbers on the dashboard could be so different from what you get locally?
8da5af5
I can't think of anything. One thing that gives this regression credibility is that it is happening across different devicelab agent profiles: Linux and Windows. So it seems unlikely that it's caused by something going on in the hardware.
8da5af5
hot_mode_dev_cycle_linux__benchmark and hot_mode_dev_cycle__preview_dart_2_benchmark don't look reasonable: they fluctuate wildly, fueling concerns that something else is going on with the benchmarking tool or hardware:
Is there some way to troubleshoot those? Run them manually on the testing infrastructure?
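One cheap way to quantify "fluctuates wildly" before blaming the tool or hardware is to flag benchmark series whose spread is large relative to their mean. This is a minimal sketch, not part of the Flutter benchmarking infrastructure; the threshold is an arbitrary assumption:

```python
import statistics

def is_noisy(samples, cv_threshold=0.3):
    """Flag a benchmark series whose coefficient of variation
    (stdev / mean) exceeds the threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return stdev / mean > cv_threshold

stable = [25, 26, 24, 25, 27]        # tight cluster around ~25 ms
wild = [25, 470, 30, 455, 28, 480]   # swings between two regimes
```

A series that trips this check repeatedly is worth investigating on the lab hardware before trusting any single regression it reports.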
8da5af5
We've seen this kind of thing before: once you go past a certain threshold of runtime, memory usage, number of items in some hash table, or whatever, you cut into a different codepath and hit a performance cliff. When the codebase is right on the edge of that boundary, you see this effect, where some runs are fast and some are slow. What hardware or OS you're testing on can obviously impact this in many ways. The regressions are usually still "real" in the sense that it's a problem in our code; it's just that we're seeing the bimodal behaviour of our code at that boundary.
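The bimodal behaviour described above (some runs fast, some slow, little in between) can be distinguished from plain noise with a simple heuristic: if one gap between consecutive sorted samples dominates the total range, the runs likely fall into two modes on either side of a cliff. A minimal sketch, with an arbitrary ratio threshold:

```python
def looks_bimodal(samples, gap_ratio=0.5):
    """Heuristic: if the largest gap between consecutive sorted samples
    dominates the total range, the runs likely split into two clusters
    (fast path vs slow path across a performance cliff)."""
    s = sorted(samples)
    total_range = s[-1] - s[0]
    if total_range == 0:
        return False
    largest_gap = max(b - a for a, b in zip(s, s[1:]))
    return largest_gap / total_range > gap_ratio

fast_and_slow = [24, 25, 26, 470, 472, 475]  # two distinct clusters
uniform = [24, 25, 26, 27, 28, 29]           # one cluster, just noise
```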
@yjbanov can hook you up with devices to run locally for troubleshooting this. I would start by getting a device, since it's unlikely the host is the cause if it's happening across both Windows and Linux.
In the meantime we should probably revert the change so that we don't miss any other regressions.
8da5af5
Sure, we can revert, but how can we re-land it if we are unable to reproduce the regression locally?
Another thing: this hotReloadVMReloadMilliseconds benchmark is part of the hotReloadMillisecondsToFrame benchmark, where essentially hotReloadMillisecondsToFrame = hotReloadDevFSSyncMilliseconds + hotReloadVMReloadMilliseconds + hotReloadFlutterReassembleMilliseconds.
hotReloadFlutterReassembleMilliseconds went down while hotReloadVMReloadMilliseconds went up, so the total hotReloadMillisecondsToFrame regressed only slightly (which is reasonable considering that we are making one more RPC call). That points to some kind of issue with attributing the time spent to the right step:
![v1stgk4fsxg](https://user-images.githubusercontent.com/381137/34632360-cc5ace0a-f229-11e7-8432-c4eabe15b6be.png)
I also have #13934, which makes this additional RPC call only on the initial reload/restart.
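The attribution concern above can be sanity-checked mechanically: the per-phase timings should sum to the end-to-end number, and a large discrepancy suggests time is being booked against the wrong step. A minimal sketch, with the millisecond values below being illustrative, not actual dashboard data:

```python
def attribution_consistent(phases, total_ms, tolerance_ms=5):
    """Check that the summed phase timings match the end-to-end
    measurement to within a tolerance; a big mismatch hints that
    time is being attributed to the wrong step."""
    return abs(sum(phases.values()) - total_ms) <= tolerance_ms

# Field names mirror the benchmark metrics discussed above;
# the numbers are made up for illustration.
phases = {
    "hotReloadDevFSSyncMilliseconds": 120,
    "hotReloadVMReloadMilliseconds": 472,
    "hotReloadFlutterReassembleMilliseconds": 60,
}
```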
8da5af5
We clearly need to be able to reproduce it, but that should definitely be possible since there's hardware physically present in our building that's reproducing it already.
Interesting, maybe there's a race condition in how we time this stuff?
8da5af5
We have a couple of devices on the device rack you could grab from. I have one on my desk. If that doesn't help, we could just walk upstairs to the lab and try to reproduce on the lab hardware. I'm wondering if things like USB speed/latency can affect these numbers.
8da5af5
Are we running these benchmarks on G4? I would imagine we use different devices for hot_mode_dev_cycle_win__benchmark/hot_mode_dev_cycle__benchmark and hot_mode_dev_cycle_linux__benchmark since they show different behavior on this benchmark.
I tried, unsuccessfully, to reproduce this on an emulator and on a Pixel.
8da5af5
We use Moto G4 for everything.
8da5af5
This reproduces on a Moto G4. The variation in performance, I would imagine, is due to the fact that we post messages on the UI thread to process the RPC call on the device, and message-queue processing is inherently unpredictable.
The RPC call is needed, so a performance hit is expected. We don't need to make the RPC call every time, though; that is what #13934 addresses.
#13934 also "fixes" the benchmark regression: in our benchmarks, hot reload happens after a restart, and with that PR the RPC call won't be issued because the restart has already switched to running from sources.
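The gating idea described above (issue the extra RPC only once, on the initial reload/restart) can be sketched as follows. This is a minimal Python illustration of the control flow, not the actual Flutter tool code, and the method name is hypothetical:

```python
class HotReloader:
    """Sketch of gating an extra RPC so it is sent only on the
    first reload/restart; subsequent reloads skip the round trip."""

    def __init__(self, send_rpc):
        self._send_rpc = send_rpc
        self._assets_directory_sent = False

    def reload(self):
        if not self._assets_directory_sent:
            # Only the first reload pays for the extra round trip.
            self._send_rpc("setAssetDirectory")  # hypothetical name
            self._assets_directory_sent = True
        # ... the rest of the reload work would go here ...

calls = []
reloader = HotReloader(calls.append)
reloader.reload()
reloader.reload()  # no second RPC is issued
```

This also explains why the benchmark (which reloads after a restart) would no longer observe the extra call.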