
Apparent performance test regression in JS Quick Edit when Tern doesn't find results #12294

core-ai-bot opened this issue Aug 31, 2021 · 18 comments

Issue by njx
Wednesday May 22, 2013 at 06:08 GMT
Originally opened as adobe/brackets#3961


  1. Run the JS Quick Edit Performance test (under the Performance tab) in sprint 24 and in sprint 25

Result: Major performance regression in sprint 25--from a few hundred ms in sprint 24 to >2 seconds each in sprint 25.

It looks like the case we're testing is one where Tern doesn't find anything, so it falls back to the old code. But it seems to take a long time to figure out that it can't find anything.
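
Roughly, the flow in question looks like this (a sketch, not the actual source; `openInlineEditorAt` and the argument names are made up for illustration):

```js
// Sketch of the suspected flow: ask Tern for the definition first, and only
// fall back to the old project-wide search when that request fails. If the
// failure path is slow, every Quick Edit open pays that cost.
ScopeManager.requestJumptoDef(session, editor, pos)
    .done(function (jumpResp) {
        openInlineEditorAt(jumpResp);      // Tern found the definition
    })
    .fail(function () {
        _findInProject(functionName)       // Tern found nothing: old code path
            .done(openInlineEditorAt);
    });
```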

Comment by njx
Wednesday May 22, 2013 at 06:16 GMT


@jeffkenton - hmm, looks like I can't assign this directly to you. I forgot that we had this performance test when reviewing your pull request, so we didn't notice this until we were running tests at the end of the sprint. It would be good to find out why Tern is taking so long to fail on every invocation--it seems like even if it takes a while to start up the first time through, we should have some cached info for the later invocations.
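
Even something as simple as remembering misses would address this; a minimal sketch (hypothetical, not the current hinting code):

```js
// Remember identifiers Tern has already failed to resolve, so later Quick
// Edit invocations skip straight to the fallback instead of re-asking Tern.
var _knownMisses = {};

function lookupDefinition(session, editor, pos, functionName) {
    var result = new $.Deferred();
    if (_knownMisses[functionName]) {
        // Known miss: go directly to the old project-wide search
        _findInProject(functionName).done(result.resolve).fail(result.reject);
    } else {
        ScopeManager.requestJumptoDef(session, editor, pos)
            .done(function (jumpResp) { result.resolve(jumpResp); })
            .fail(function () {
                _knownMisses[functionName] = true;   // cache the miss
                _findInProject(functionName).done(result.resolve).fail(result.reject);
            });
    }
    return result.promise();
}
```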

Comment by njx
Monday Jun 03, 2013 at 18:42 GMT


Reviewed. Medium priority, to @jeffkenton for sprint 26.

Comment by jeffkenton
Monday Jun 03, 2013 at 19:29 GMT


I just tried removing my recent Tern jump-to-definition code from QuickEdit and tested this. I used the brackets.js file and searched for the function "always()", which Tern doesn't find. Without the jump-to-definition code, the time taken was 3 seconds; with it, the time was only 100ms longer. I will look further.

Comment by njx
Monday Jun 03, 2013 at 19:32 GMT


Hmm. Maybe there's just something wrong with the perf test. I did try this manually and it seemed noticeably longer with Tern, but I've noticed some inconsistency in the perceived amount of time it takes to open Quick Edit with the Tern stuff, so it's possible there's something else going on as well.

Comment by jeffkenton
Monday Jun 03, 2013 at 19:39 GMT


One thing further. Doing the same test a second time (without closing Brackets) gets results much faster in both cases -- approximately 450 ms, using the "Show Performance Data" tools.

Question: how did you disable Tern while doing this? Did you comment out the code, or do you have a trick I didn't think of?

Comment by njx
Wednesday Jun 05, 2013 at 00:49 GMT


I just went back to sprint 24 to compare, I think.

Comment by jeffkenton
Wednesday Jun 05, 2013 at 12:28 GMT


I just ran the following test with newly downloaded Brackets (on a Mac), both with Sprint 24 and Sprint 25:

  1. Open brackets.js
  2. Scroll to line 200
  3. Position the cursor on the call to "always()"
  4. Invoke Quick Edit
  5. Look at "Show Performance Data"

The performance results are about 4 seconds for QuickEdit in both cases on my machine. I don't see what you are reporting. Note that in both cases, QuickEdit reads in about 1500 extra files to find various definitions of "always()". Jump-To-Definition doesn't find "always()".

Running the performance tests, I see identical results -- about 1.25 seconds -- for both Sprint 24 and Sprint 25.

It looks like we're not doing the same thing, somehow.

Comment by njx
Wednesday Jun 05, 2013 at 14:30 GMT


Weird. I'll spend some time today to try to verify what I thought I was seeing.

Comment by njx
Thursday Jun 06, 2013 at 00:06 GMT


I can definitely reproduce the performance test discrepancy (note that this is not from "Show Performance Data", but from the JSQuickEdit test in the Performance tab). In sprint 24, the first JavaScript Inline Editor Creation entry is about 600ms, and all the others further down are less than 100ms. In sprint 25, all of the JavaScript Inline Editor Creation entries are over 1800ms.

However, I'm starting to suspect that this test is misleading somehow. If I try to reproduce it manually (by opening jquery.effects.core.js, putting the cursor in the extend call on line 272, then doing Cmd-E), I get roughly similar times in both sprint 24 and sprint 25--roughly 4s, both perceived and as reported by "Show Performance Data". And if it is actually taking 4s to open the inline editor, then it's not clear why the test should show <100ms timings in sprint 24. So the test must not be timing what we think it's timing.

I also had to modify the test in sprint 25 to get it to run with the new code, which assumes the cursor is actually in the function name we're trying to find, whereas the old code ignored the editor contents and just looked for the specified function.
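
Concretely, the test change was along these lines (a sketch; the position variables are illustrative, not the actual spec code):

```js
// The old provider took the function name directly, so the test editor's
// cursor didn't matter. The Tern-based provider reads the identifier under
// the cursor, so the test must place the cursor inside the name first.
testEditor.setCursorPos(callLine, callCh + 1);        // inside the identifier
CommandManager.execute(Commands.TOGGLE_QUICK_EDIT);   // then open Quick Edit
```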

@jasonsanjose - do you know who wrote this perf test originally? Seems like we might need to figure out what's going on and try to get better timings.

Comment by jasonsanjose
Thursday Jun 06, 2013 at 19:02 GMT


I wrote those perf tests. I haven't looked at the new tern code yet to see what needs an update though.

Comment by njx
Thursday Jun 06, 2013 at 20:01 GMT


It seems like it must have been wrong even for the pre-tern version, though, because if I do a similar lookup manually in sprint 24 it takes 4s the first time, whereas the perf test shows ~600ms.

Comment by njx
Thursday Jun 06, 2013 at 20:02 GMT


At this point I think we should punt this out of Sprint 26 (and maybe reassign it to @jasonsanjose?). There's enough evidence that there's no significant regression when actually using Quick Edit in the product, and that this might just be an artifact of the perf test.

Comment by njx
Friday Jun 07, 2013 at 17:19 GMT


Moved to sprint 27.

Comment by njx
Friday Jun 07, 2013 at 17:19 GMT


Updated title to indicate that this is probably an issue with the perf test rather than a real regression.

Comment by jasonsanjose
Tuesday Jun 18, 2013 at 20:44 GMT


I manually tested the quick edit lookup several times in sequence. At one point, the inline editor didn't open at all. So I tried again and noticed the recursive-test error message: "Recursive tests with the same name are not supported. Timer name: INLINE_WIDGET_OPEN".

My guess is that the promise from ScopeManager.requestJumptoDef() is neither resolved nor rejected. Not sure if this could be related to the performance measurements being so far off. I'll keep digging.
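
If that's right, the sequence would look something like this (a sketch of the suspected interaction, using PerfUtils-style timer calls rather than the exact source):

```js
// The inline-editor open is timed under a fixed name. If the jump-to-def
// promise never settles, the matching end-measurement never runs...
PerfUtils.markStart("INLINE_WIDGET_OPEN");
ScopeManager.requestJumptoDef(session, editor, pos)
    .always(function () {
        PerfUtils.addMeasurement("INLINE_WIDGET_OPEN");   // never reached
    });
// ...so the next Quick Edit starts a timer with the same name while the
// first is still open, and PerfUtils throws:
//   "Recursive tests with the same name are not supported."
```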

Comment by jasonsanjose
Tuesday Jun 18, 2013 at 22:20 GMT


The perf data is accurate, but the behavior differs between Brackets and the test 👻!

When I manually test quick edit in sequence, I fall through to the old _findInProject code every time.

When running the perf test, it only falls through to _findInProject the first time. The remaining 4 times Tern actually picks up the definition in /Applications/Brackets Sprint 27.app/Contents/dev/src/extensions/default/JavaScriptQuickEdit/unittest-files/jquery-ui/jquery-1.7.2.js.

So....a few different issues:

  1. Why does the perf test behave differently than a manual test?
  2. Should Tern pick up this definition at all?
  3. If Tern is picking up the definition, why didn't that happen the first time around?
  4. If Tern finds the definition, should it get cached for subsequent queries?

@jeffkenton any thoughts? To reproduce my findings during the performance test, I had to bump the timeout in JavaScriptQuickEdit/unittests.js:544 to 20000ms.

  1. Open the unit test window
  2. Open dev tools for the unit test window
  3. Set a breakpoint on JavaScriptQuickEdit/unittests.js:540 (this gives us time to open dev tools for the next step)
  4. Go to http://localhost:9234 and open dev tools for the testWindow
  5. In the testWindow dev tools instance, set a conditional breakpoint at JavaScriptQuickEdit/main.js:119 with the condition `console.log(jumpResp.fullPath), false` to print out the Tern result for each query (see the note after these steps)
  6. Go back to the unit test dev tools window, remove the breakpoint on line 540, then resume
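
Note on step 5: the condition is a logging trick. The comma expression calls console.log for its side effect and then evaluates to false, so the debugger prints the Tern result without ever pausing on that line:

```js
// Entered as the breakpoint *condition* in dev tools, not as code in main.js:
console.log(jumpResp.fullPath), false
```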

Comment by dangoor
Tuesday Jul 23, 2013 at 15:57 GMT


@jasonsanjose I think I can answer your questions...

> Why does the perf test behave differently than a manual test?

Granted this was a month ago and you may not remember exactly what you did, but when you manually tested, did you open extensionPath + "/unittest-files/jquery-ui" as your project, or did you try it with Brackets as your project? The code hinting has heuristics that determine where it will look for other files and how many files it will look at, and that could make a difference between your manual tests and the performance test.

> Should Tern pick up this definition at all?

Yes, the Quick Edit is for jQuery.extend and that's pretty unambiguous.

> If Tern is picking up the definition, why didn't that happen the first time around?

Tern probably hadn't "warmed up". It loads everything up in a background thread, and my guess is that it wasn't quite ready.

> If Tern finds the definition, should it get cached for subsequent queries?

Sort of. The code hinting code does remember the definitions it has found previously, so it won't need to reparse all of those files the next time around.

If there's a way for the test to know when Tern is ready and run only then, that would be useful. I'm not sure what we're trying to measure here, though. When people are using JS Quick Edit, there are two common cases:

  1. definition can be found by Tern (fast)
  2. have to do Find in Project (slow, today at least)

Would our ideal be to have a test for each of these cases?
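
If so, the specs might look roughly like this (runQuickEditPerfTest is a hypothetical helper that opens the inline editor and records the timing; assertions elided):

```js
// Sketch: one perf spec per common Quick Edit path.
it("creates the inline editor quickly when Tern finds the definition", function () {
    runQuickEditPerfTest({ functionName: "extend", expectTernHit: true });
});

it("falls back to _findInProject when Tern finds nothing", function () {
    runQuickEditPerfTest({ functionName: "always", expectTernHit: false });
});
```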

Comment by dangoor
Tuesday Jul 23, 2013 at 17:37 GMT


Removed from sprint 28. I'm thinking this should be "Low Priority" because I don't think there is a real performance regression here, just some inconsistent behavior in how the test interacts with the newer Quick Edit implementation.
