Use explicit keys for often accessed closures #1591
Conversation
Adds unnecessary overhead in a perf-critical case
Also cache the hashCode
62863bd to 5ec1c0e
```diff
@@ -2904,7 +2905,8 @@ abstract class ModelElement extends Canonicalization
   // library.fullyQualifiedName seems somehow to break things.
   //String get fullyQualifiedNameWithoutLibrary => _memoizer.memoized2(fullyQualifiedName.replaceFirst, "${library.fullyQualifiedName}.", '');
   String get fullyQualifiedNameWithoutLibrary =>
-      _memoizer.memoized(_fullyQualifiedNameWithoutLibrary);
+      _memoizer.memoized(_fullyQualifiedNameWithoutLibrary,
+          altKey: 'fullyQualifiedNameWithoutLibrary');
```
Is hashing a string really noticeably faster than hashing a tearoff method?
I'll mention that the point of the memoizer was to reduce boilerplate at call sites. This is adding more boilerplate. I'd like to see some evidence of a large performance win (say, at least 10% total runtime performance improvement outside of checked mode) before paying this cost.
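The trade-off being debated — keying the cache on the tearoff closure itself versus on an explicit string `altKey` — can be sketched with a toy memoizer. This is a hypothetical illustration, not dartdoc's actual `_memoizer` implementation:

```dart
// Hypothetical sketch of the two keying strategies under discussion.
class Memoizer {
  final Map<Object, Object?> _cache = {};

  T memoized<T>(T Function() f, {String? altKey}) {
    // Closure-keyed: every call site tears off `f` afresh, so the map
    // lookup hashes a newly allocated closure object on each access.
    // String-keyed: the map hashes a String, whose hash is cheap and
    // cached by the VM after the first computation.
    final Object key = altKey ?? f;
    return _cache.putIfAbsent(key, f) as T;
  }
}

class Model {
  final _memoizer = Memoizer();
  String _expensive() => 'computed';

  // Concise, but pays Closure.hashCode on every access.
  String get viaClosure => _memoizer.memoized(_expensive);

  // More boilerplate at the call site, but cheaper hashing.
  String get viaAltKey => _memoizer.memoized(_expensive, altKey: 'expensive');
}

void main() {
  final m = Model();
  print(m.viaClosure); // computed
  print(m.viaAltKey); // computed
}
```

Closure-keyed lookup still *works* because tearoffs of the same method from the same receiver compare equal; the dispute is only about the cost of hashing them.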
I see over a doubling in speed.
Check Observatory when running – over half the time is spent calculating hashCode on closures.
…On Thu, Jan 11, 2018 at 9:02 AM, jcollins-g ***@***.***> wrote:
I'll mention that the point of the memoizer was to reduce boilerplate at
call sites. This is adding more boilerplate. I'd like to see some evidence
of a large performance win (say, at least 10% total runtime performance
improvement outside of checked mode of flutter) before paying this cost.
Last master build: 1130s for SDK gen https://travis-ci.org/dart-lang/dartdoc/jobs/327443617#L632
It never occurred to me that computing a hashCode for a tearoff closure could take a non-trivial amount of time, but as you point out, it does seem to. I don't like the solution of introducing new keys, though; perhaps in this instance we should just drop the caching. That brings up the larger question of whether the memoizer concept itself is even viable in Dart, given this issue. Just thinking here, but perhaps we should revert the memoizer application until that's determined?
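The surprise here is worth making concrete: each tearoff of the same instance method can be a freshly allocated object, yet the tearoffs still compare equal, which forces hashCode to be derived from the receiver and function rather than from cheap object identity. A minimal demonstration:

```dart
class C {
  int m() => 42;
}

void main() {
  final c = C();
  // Each tearoff may allocate a fresh closure object.
  final f1 = c.m;
  final f2 = c.m;

  // Tearoffs of the same method from the same receiver compare equal;
  // this is what lets a Map keyed on closures find cached entries.
  print(f1 == f2); // true

  // But they are generally distinct objects, so hashCode cannot be a
  // cheap identity hash; it must agree with == across fresh allocations.
  print(identical(f1, f2));
}
```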
Maybe? An option that should be cheaper at runtime (but more code work) would be using […]
I think what I'd like to do is pretty much always pass a string key. I figured it might be a little slower, but this is ridiculously slow. Looks like the hashable-list change is pretty valuable too; that's a sizeable chunk of the improvement, even if it isn't most of it. OK. Given the extraordinary impact of this change, I'll defer my nits on whether I like some of the structure of it until later – that'll motivate me to find a solution I like better that also works for our users. :-) Approved.
Adding @mraleph to this as Devon indicated he might be interested in why hashing tearoff closures is extremely slow.
I have looked into the issue. One important thing to keep in mind about a line of code like:

```dart
String get fullyQualifiedNameWithoutLibrary =>
    _memoizer.memoized(_fullyQualifiedNameWithoutLibrary,
        altKey: 'fullyQualifiedNameWithoutLibrary');
```

is that the VM allocates a new closure object every time `_fullyQualifiedNameWithoutLibrary` is torn off here. So when you later do a cache lookup keyed on that closure, its hashCode has to be computed again on the freshly allocated object. Now, given how hot this code is, I recommend not using the memoizer here at all. My recommendation would be to do this:

```diff
diff --git a/lib/src/model.dart b/lib/src/model.dart
index f2c08702..f1f9deef 100644
--- a/lib/src/model.dart
+++ b/lib/src/model.dart
@@ -2917,14 +2917,16 @@ abstract class ModelElement extends Canonicalization
   String get fileName => "${name}.html";

+  String _fullyQualifiedName;
+  String _fullyQualifiedNameWithoutLibrary;
+
   /// Returns the fully qualified name.
   ///
   /// For example: libraryName.className.methodName
   @override
-  String get fullyQualifiedName => _memoizer.memoized(_buildFullyQualifiedName,
-      altKey: 'fullyQualifiedName');
+  String get fullyQualifiedName => _fullyQualifiedName ??= _buildFullyQualifiedName();

-  String _fullyQualifiedNameWithoutLibrary() {
+  String _buildFullyQualifiedNameWithoutLibrary() {
     // Remember, periods are legal in library names.
     return fullyQualifiedName.replaceFirst(
         "${library.fullyQualifiedName}.", '');
@@ -2934,8 +2936,7 @@ abstract class ModelElement extends Canonicalization
   // library.fullyQualifiedName seems somehow to break things.
   //String get fullyQualifiedNameWithoutLibrary => _memoizer.memoized2(fullyQualifiedName.replaceFirst, "${library.fullyQualifiedName}.", '');
   String get fullyQualifiedNameWithoutLibrary =>
-      _memoizer.memoized(_fullyQualifiedNameWithoutLibrary,
-          altKey: 'fullyQualifiedNameWithoutLibrary');
+      _fullyQualifiedNameWithoutLibrary ??= _buildFullyQualifiedNameWithoutLibrary();

   String get sourceFileName => element.source.fullName;
```

On my measurements it shaves away another ~80 seconds from the docs generation time. I can send you this patch as a PR.

[Thanks for CCing me. Please don't hesitate to ask me any sort of performance questions! I am always happy to help with performance tuning. At the moment my capacity is a bit stretched due to the Dart 2.0 migration – but after we get past that, we can dig deeper into dartdoc performance on the VM.]
@mraleph Thanks for the look! Your suggestion is pretty much what this code used to do, but the combination of the VM allocating new tearoff objects and hashCode being slow surprised me (I've only been writing Dart for a year or so). I wrote the memoizer partly to clear up some boilerplate, but also to be able to add asserts later in checked mode for when things that were supposed to be static changed values underneath the cache. (See dartdoc/lib/src/model_utils.dart, line 225, at 52d46a4.)
I think for now I'm going to revert the _memoizer change entirely; there's just too much of a hit to keep using it.