Local development setup, adjusting Closure compiler compilation level #89
Sorry that you had to go through painful debugging sessions to arrive to that conclusion. Few clarifications:
@stephanwlee No worries. This is super helpful, thanks! 👍 So the root issue is

Relatedly, I'm curious how you all develop now: do you have much faster build times on newer laptops, or are you offloading these build tasks to a big speedy Bazel build server in the 🌩️?
Bazel can do distributed building (https://docs.bazel.build/versions/2.0.0/remote-execution.html) but I don't think I use that. My desktop machine tends to have more cores, and the added parallelism definitely helps. My build at the end is closer to 20s, but I wish it were closer to <1s, or at most 10s. There are other bundler strategies that we want to employ, but those are more appropriate once we can actually trust the tests.
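For reference, remote execution is wired up through `.bazelrc` flags; a minimal sketch, with a placeholder endpoint and target (the `--remote_executor`, `--remote_cache`, and `--jobs` flags are real Bazel flags, but the endpoint and target path here are hypothetical):

```
# .bazelrc sketch — hypothetical endpoint, real Bazel flags
build:remote --remote_executor=grpc://remotebuild.example.com:8980
build:remote --remote_cache=grpc://remotebuild.example.com:8980
build:remote --jobs=50

# then invoke with, e.g.:
#   bazel build --config=remote //path/to:target
```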
Thanks both of you for this great discussion on compilation, as our WIT compilation story is obviously complex, due to our reliance on TB build rules, which were required to support the complex build story for both TB and WIT/Facets (TS, JS, protos, goog.require, dep graphs, ...). @kevinrobinson for now I think at the very least it makes sense to document the ability to use compilation level BUNDLE during development in our DEVELOPMENT.md. Do you wish to make that PR?
hello! When doing development, what's a good workflow? I'll share what I discovered and hope that this helps other folks new to the repo, or perhaps they can help me understand better ways to approach this. The TL;DR is that `compilation_level = "BUNDLE"` seems useful for development :)

I started with the web demo for the smiling classifier, and it seems that the way the build works, changes to the filesystem aren't detected, so you have to kill the process and then run the full vulcanize process on each change. This takes about a minute on my laptop, so that's what prompted me to look into this.
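Concretely, the change I ended up making is a one-line edit to the demo's build target; a hypothetical sketch of what that looks like in the BUILD file (the target name and other attributes here are illustrative, not copied from the repo — only `compile` and `compilation_level` are the params discussed below):

```starlark
# Sketch of the demo's vulcanize target with BUNDLE enabled.
# Names and paths are illustrative; see wit_dashboard/demo/BUILD for the real target.
tensorboard_html_binary(
    name = "wit-image-demo",
    compile = True,
    compilation_level = "BUNDLE",  # skips Closure's slower optimization passes
    input_path = "/wit-image-demo.html",
    output_path = "/wit-image-demo.html",
    deps = [":wit-image-demo-lib"],  # illustrative
)
```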
In the Bazel output I see:
The vulcanizing step takes about a full minute to rebuild even when changing just the text in a line of logging. To understand what it's doing, I read through the BUILD files and the related `.bzl` definitions over in TensorBoard. I noticed that there are references to other libraries in https://github.com/PAIR-code/what-if-tool/blob/master/wit_dashboard/demo/wit-image-demo.html#L19, and figured maybe that's why the build is taking so long. Removing those cut the build time down to ~45 seconds.

Removing `wit-dashboard` cuts the build down to ~15 seconds. Removing everything but the Polymer bits brings the build down to ~2 seconds.
Stepping back, I see that `wit-dashboard` includes dependencies in the BUILD file from Polymer, Facets, and TensorBoard (as well as WIT components). If I comment out the WIT dependencies from the BUILD file and from the `<link />` tags in `wit-dashboard.html`, this still takes ~40 seconds to build. So it seems to me like most of the build time, even when just changing text in a console.log statement, comes from either re-compiling dependencies, or from the whole-program optimization the Vulcanize task is doing (or maybe that the Closure compiler is doing on its behalf).

I tried copying the `vulcanize.bzl` from TensorBoard into the WIT folder so I could look at it and understand what it's doing. But in the process, I noticed some params in the BUILD task that ultimately does the vulcanizing:

Setting `compile = False` cuts the build to 2 seconds! But it doesn't work, because somewhere in the project there are `goog.require`-style dependencies.

Changing the `compilation_level` helps though! I found these options in the Closure compiler, and luckily the build task in TensorBoard that calls Closure passes those right along. This gets things working again and down to ~20 seconds. The Closure Bazel defs say to use WHITESPACE_ONLY, but note that it will disable type checking (https://github.com/bazelbuild/rules_closure/blob/4925e6228e89e3b051a89ff57b8a033fa3fb9544/README.md#arguments-2). This helps (~10 seconds) but breaks the app. The Closure docs don't mention BUNDLE but you can see it in the source:

And using BUNDLE takes about half the time to build compared to SIMPLE_OPTIMIZATIONS.
In the end, this is the impact on my sweet 2012 MacBook Pro:
So, I'll do this locally now, but would also love to learn if there are better ways to do this :)
Alternately, I also poked around to see if there was a way to update these calls to listen to a command-line arg or env variable passed through `bazel run`. I skimmed the Bazel docs and issues and saw things like aspects and `bazelrc`, but nothing seemed fast and direct. I suppose this could be done in TensorBoard in the `tensorboard_html_binary` task. But I also discovered that there are a `WORKSPACE` and `workspace.bzl` here, so maybe that could be a place to add a layer of indirection: the project would call into a `tensorboard_html_binary_wrapper` that reads some env switch, so that it builds for production by default, but if you do `bazel run whatev --be-faster` then it can run the Closure compiler without the slower advanced optimizations. If doing something like that is helpful I can try, but attempting changes to the build setup is always dicey :)

If that's too much of a pain I can just add a note to https://github.com/PAIR-code/what-if-tool/blob/master/DEVELOPMENT.md to help folks discover how to speed up local builds. Thanks!
EDIT: Also noticed that a long time ago @stephanwlee was thinking about this upstream tensorflow/tensorboard#1599 and some other open issues reference related things about advanced compilation mode in dev (eg, tensorflow/tensorboard#2687)