add settings for context lines, show context in match view #847

Merged
merged 10 commits into atom:master from dirk-thomas:match_with_context on Apr 12, 2017

Conversation

9 participants
@dirk-thomas
Contributor

dirk-thomas commented Jan 29, 2017

Requirements

Description of the Change

The patch adds two new options, searchContextLineCountBefore and searchContextLineCountAfter, which are passed down to scandal to return matches with context lines around the matched line. The match view is modified so that each match is no longer a single li but a nested list containing the context lines before, the matching line, and the context lines after.

To visually distinguish the context lines from the match line, the color of the match line has been updated. The way the ".selected" item is determined is also updated so that the match line is highlighted rather than the first context line in the sub-list.
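For reference, a minimal sketch of the two settings as they could appear in the package's config schema; the key names and the 0 defaults come from this PR, while the exact schema layout is an assumption:

// Sketch only: key names and defaults from this PR, layout assumed.
const config = {
  searchContextLineCountBefore: {
    type: 'integer',
    default: 0,    // 0 = no context before the match, i.e. the previous behavior
    minimum: 0
  },
  searchContextLineCountAfter: {
    type: 'integer',
    default: 0,    // 0 = no context after the match
    minimum: 0
  }
}

The values can then be read via atom.config.get('find-and-replace.searchContextLineCountBefore') / ...After and passed along with the search options.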

Alternate Designs

Other approaches:

In contrast to those, this PR does not require a TextBuffer for each search result. Instead it uses an extended version of scandal to provide the context directly in the returned match objects.

Benefits

It implements the long-desired feature described in #190. The approach requires fewer resources than the other approaches listed.

Possible Drawbacks

The new settings default to 0 lines of context before and after, which matches the previous behavior. But since the view structure has been updated to accommodate the optional context lines, that might affect compatibility. I don't have any experience with Atom views and therefore can't estimate how likely this is.

Applicable Issues / Remaining Todos

  • In the current state, when the search results list a file which is not opened yet and you click on the result, the match is updated and loses the context information. This is because the search is repeated in the buffer (instead of using scandal through the workspace API). To address this, TextBuffer.scan needs to be extended to support the same new options as in the referenced scandal ticket. Therefore I have prefixed the title with [WIP] for now. This has since been addressed with the PR to the text-buffer package.

  • The current settings don't use a custom order attribute and are therefore listed in alphabetical order. Since it is a bit confusing that the "before" option comes after the "after" option, it might be good to add explicit order attributes to all options.

  • Add test coverage / update existing specs as soon as there is a consensus on the selected approach and the feature works.

Related Stuff

The following items would all be nice to have, but they are already a problem in the current state and I consider them orthogonal to this PR. They can be improved / addressed in the future in separate PRs:

  • Correct indentation (multiple whitespace characters in each line are currently rendered as a single space)
  • Multiple results in a single line (and now also overlapping lines of context) could be deduplicated
  • Gaps between code in the same file could be expandable

@dirk-thomas dirk-thomas changed the title from [WIP] add settings for context lines, show context in match view to add settings for context lines, show context in match view Jan 31, 2017

@dirk-thomas

Contributor

dirk-thomas commented Jan 31, 2017

I removed the "[WIP]" prefix from the title since the feature is "complete" now. While there are still failing specs to address I think it would be good to review this first to ensure there is a consensus on the approach.

@maxbrunsfeld

Contributor

maxbrunsfeld commented Jan 31, 2017

@dirk-thomas Thanks for this! Could you provide screenshots or animated GIFs showing what this new UI looks like?

One concern: I'm not sure that a config setting is the best way for the user to specify how many context lines they want. I would think that you'd want to vary these numbers on a search-by-search basis. For me personally, I usually wouldn't want any lines of context, but in certain searches, I'd want to see some preceding and/or following context, depending on what I'm looking for. So I would think that we might actually want some kind of UI for specifying the amount of context.

What do you think?

/cc @simurai @atom/feedback

@dirk-thomas

Contributor

dirk-thomas commented Jan 31, 2017

I kept the UI changes minimal for now so that the PR is easier to review. Please see the current results below, with 2 lines of context above and below. The matched line is highlighted in white (new, to better distinguish the matched lines from the context lines) and the matched text is highlighted with a blue box (as it was before).

search-results-with-context

While the PR only adds package settings for the number of context lines, the API is designed to accept these arguments for each query. So it should be easy to customize the UI to configure them for every search (see the sketch at the end of this comment). I am not familiar with the UI elements in Atom; maybe someone could suggest how to set up the additional elements beside the existing search options.

For the search results I could see a few enhancements to improve the view (specific to the new context):

  • reduce spacing between the context lines belonging to a single result (maybe same spacing as lines in a TextEditor?)
  • more separation between search results (currently only uses a 2px padding, same as before), or an empty line, or a horizontal line for visual separation
  • optionally fold / unfold the context (that might be another way to address the use case you mentioned, where you only sometimes need to see the context)

Besides that, there are other possible improvements, but they also apply to the results without context:

  • render indentation of lines correctly
  • deduplicate results, if multiple matches are found in the same line
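A hypothetical sketch of the per-search configuration mentioned above; the function name and the 'before' / 'after' override names are placeholders, only the config keys come from this PR:

// Placeholder sketch: 'before' / 'after' would come from a future per-search UI.
function contextLineCounts (overrides = {}) {
  return {
    before: overrides.before != null
      ? overrides.before
      : atom.config.get('find-and-replace.searchContextLineCountBefore'),
    after: overrides.after != null
      ? overrides.after
      : atom.config.get('find-and-replace.searchContextLineCountAfter')
  }
}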
@50Wliu

Member

50Wliu commented Feb 1, 2017

a horizontal line for visual separation

I like this option the best. Just scanning the screenshot it's very hard to tell where a jump in the file occurs.

@dirk-thomas

Contributor

dirk-thomas commented Feb 1, 2017

@50Wliu I added a second commit (747c91e) which simply modifies the css. The result is definitely easier to "parse" visually:

search-results-with-context -with-sep-line

@simurai

Member

simurai commented Feb 1, 2017

@maxbrunsfeld I usually wouldn't want any lines of context, but in certain searches, I'd want to see some preceding and/or following context, depending on what I'm looking for.

What about something similar to the "blue expanding button" on github.com? That way you could get context for a specific line. Maybe each click increases the context by 5 extra lines.

expand

Then there could also be a <input class="input-number" type="number" /> in the header where you can update the config for all lines, so that you don't have to open the package settings every time. Not sure if it needs separate values for before and after?

number

Scroll position would probably quickly get out of hand.

Anyways, just some ideas. It could also be considered in a follow-up PR to keep this one smaller in scope.

@dirk-thomas

Contributor

dirk-thomas commented Feb 1, 2017

@simurai The advantage of this PR over the other approaches is that it relies on scandal to return the context lines together with each match. That is more efficient because it does not require a text buffer to be opened for every result (consider searching through many large files). But as a consequence you can't just add "more" context on demand, since that would require opening the file containing the match again and reading the additional lines.

As you mentioned, it could be considered in a follow-up PR.
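For reference, a sketch of the kind of match object this approach assumes scandal to return; the field names are illustrative, not the exact scandal API:

// Illustrative only: the point is that the context travels with the match,
// so no TextBuffer has to be opened per result.
const exampleMatch = {
  range: [[12, 8], [12, 11]],              // position of the matched text
  lineText: '  const cat = adoptCat()',    // the matching line
  leadingContextLines: ['function pets () {', '  // ...'],
  trailingContextLines: ['  return cat', '}']
}

The match view then renders one sub-list per match: the context lines before, the (highlighted) match line, and the context lines after.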

@simurai

Member

simurai commented Feb 2, 2017

@dirk-thomas: that would require opening the file containing the match again and reading the additional lines.

Oh, I see. Ok, another naive idea 😇.. what about always fetching some extra context and hiding it initially? Maybe keep the context in memory, or add it to the DOM and hide it with CSS? Then when expanding, it would be instantly available. Of course, this only makes sense if the extra context doesn't significantly slow down the search.

@mherodev

mherodev commented Feb 2, 2017

@dirk-thomas The horizontal rules are a little noisy visually.

How about using ellipses in place of a line number instead?

image

I also snuck in some highlighting on matching line numbers, but don't mind me :)

@dirk-thomas

Contributor

dirk-thomas commented Feb 2, 2017

@simurai The optional folding / unfolding of the context was what I tried to suggest earlier.

@mherodev The ellipses would be trivial to implement instead of the horizontal line. I would rather wait until we have a consensus on which direction this should go before updating the PR. Your picture shows nicely how it could look.

@dirk-thomas

Contributor

dirk-thomas commented Feb 2, 2017

@maxbrunsfeld Can you please change the label from "work-in-progress" to "needs-review"?

@bmherold

bmherold commented Feb 2, 2017

Getting very close to feature parity with Sublime!

The only suggestion I have is to collapse overlapping results into a single block (Sublime does this) which will reduce noise. That may require a separate PR but it's worth mentioning. I attached a screenshot of Sublime for reference.

screen shot 2017-02-02 at 11 43 39 am

@dirk-thomas

Contributor

dirk-thomas commented Feb 11, 2017

Since the beta of 1.15 was just released, it would be great if this long-awaited feature could be integrated (after a decision on how it should look has been made) so that it becomes available in the next beta (1.16).

@lee-dohm

Member

lee-dohm commented Feb 11, 2017

@dirk-thomas I say go with the horizontal lines and we can iterate on it once it has been merged.

@dirk-thomas

Contributor

dirk-thomas commented Feb 13, 2017

@lee-dohm Sounds good to me. That matches the current state of the patch.

@lee-dohm

Member

lee-dohm commented Feb 13, 2017

@dirk-thomas Everything is good for final review then?

@dirk-thomas

Contributor

dirk-thomas commented Feb 14, 2017

@lee-dohm I would say yes. If this gets a +1, I will update the specs to match the new HTML structure, if that is ok.

@lee-dohm

Member

lee-dohm commented Feb 14, 2017

@dirk-thomas you have a go 🚀

@dirk-thomas

Contributor

dirk-thomas commented Feb 21, 2017

A quick update about the progress: it took me quite a while to understand more of the internals, fix issues in this patch along the way, and update (most of) the specs. I think it is getting pretty close and the only specs still failing are related to the scrolling with core:page-down and core:page-up.

Before, the invariant was that each result had the same height (one line). With the context (sometimes there is none if the result is at the top or bottom of a file) this is no longer true. I changed the implementation of selectPreviousPage and selectNextPage quite a bit. The existing specs for this behavior are:

it "selects the first result on the next page when core:page-down is triggered", ->
itemHeight = resultsView.find('.selected').outerHeight()
pageHeight = Math.round(resultsView.innerHeight() / itemHeight) * itemHeight
expect(resultsView.find("li").length).toBeLessThan resultsView.getPathCount() + resultsView.getMatchCount()
resultLis = resultsView.find("li")
getSelectedIndex = ->
resultLis.index(resultsView.find('.selected'))
initialIndex = getSelectedIndex()
atom.commands.dispatch resultsView.element, 'core:page-down'
newIndex = getSelectedIndex()
expect(resultsView.find("li:eq(#{initialIndex})")).not.toHaveClass 'selected'
expect(resultsView.find("li:eq(#{newIndex})")).toHaveClass 'selected'
expect(resultsView.prop('scrollTop')).toBe pageHeight
expect(newIndex).toBeGreaterThan initialIndex
initialIndex = getSelectedIndex()
atom.commands.dispatch resultsView.element, 'core:page-down'
newIndex = getSelectedIndex()
expect(resultsView.find("li:eq(#{initialIndex})")).not.toHaveClass 'selected'
expect(resultsView.find("li:eq(#{newIndex})")).toHaveClass 'selected'
expect(resultsView.prop('scrollTop')).toBe pageHeight * 2
expect(newIndex).toBeGreaterThan initialIndex
_.times 60, ->
atom.commands.dispatch resultsView.element, 'core:page-down'
expect(resultsView.find("li:last")).toHaveClass 'selected'
it "selects the first result on the next page when core:page-up is triggered", ->
atom.commands.dispatch resultsView.element, 'core:move-to-bottom'
itemHeight = resultsView.find('.selected').outerHeight()
pageHeight = Math.round(resultsView.innerHeight() / itemHeight) * itemHeight
initialScrollTop = resultsView.scrollTop()
itemsPerPage = Math.floor(pageHeight / itemHeight)
initiallySelectedIndex = Math.floor(initialScrollTop / itemHeight) + itemsPerPage
expect(resultsView.find("li:eq(#{initiallySelectedIndex})")).toHaveClass 'selected'
atom.commands.dispatch resultsView.element, 'core:page-up'
expect(resultsView.find("li:eq(#{initiallySelectedIndex})")).not.toHaveClass 'selected'
expect(resultsView.find("li:eq(#{initiallySelectedIndex - itemsPerPage})")).toHaveClass 'selected'
expect(resultsView.prop('scrollTop')).toBe initialScrollTop - pageHeight
atom.commands.dispatch resultsView.element, 'core:page-up'
expect(resultsView.find("li:eq(#{initiallySelectedIndex - itemsPerPage})")).not.toHaveClass 'selected'
expect(resultsView.find("li:eq(#{initiallySelectedIndex - itemsPerPage * 2})")).toHaveClass 'selected'
expect(resultsView.prop('scrollTop')).toBe initialScrollTop - pageHeight * 2
_.times 60, ->
atom.commands.dispatch resultsView.element, 'core:page-up'
expect(resultsView.find("li:eq(0)")).toHaveClass 'selected'
These specs are not really appropriate anymore. Is it ok to rewrite these two specs, which currently rely on exact pixel coordinates, to instead check for the appropriate scrolling direction (without implementing the same logic as the tested code)?

Besides that, there are a few specs failing with callback is not a function. I currently can't reproduce them locally. Maybe I just have to sync my local work with the latest master branches again?

@lee-dohm

Member

lee-dohm commented Feb 21, 2017

Is it ok to rewrite these two specs which currently rely on exact pixel coordinates to rather check for the appropriate scrolling direction (without implementing the same logic as the tested code)?

According to the description of the specs, when executing core:page-down/up, the first result on the next/previous page should be at the top. If the "top of the result" is considered to be the first line of context, could you still make the specs rely on pixel positions? I wouldn't want context to be randomly cut off depending on the user's settings.

Maybe I just have sync my local work with the latest master branches again?

Might be a good idea to see if they go away.

@dirk-thomas

Contributor

dirk-thomas commented Feb 22, 2017

Is it ok to rewrite these two specs which currently rely on exact pixel coordinates to rather check for the appropriate scrolling direction (without implementing the same logic as the tested code)?

According to the description of the specs, when executing core:page-down/up, the first result on the next/previous page should be at the top. If the "top of the result" is considered to be the first line of context, could you still make the specs rely on pixel positions? I wouldn't want context to be randomly cut off depending on user's settings.

I updated the spec to remember the position of the first match (using offset()) and to check that, after scrolling a page, the newly selected item is at the same position. While the approach is very different, it achieves the same goal of ensuring the newly selected item is at the top (when scrolling down).

Locally all specs pass for me (using the patches from the three referenced PRs).

Maybe I just have sync my local work with the latest master branches again?

Might be a good idea to see if they go away.

Maybe those can't pass until the other PRs are merged and released. Could that be the case? Or is there another way to let the CI jobs use the referenced PRs for testing?

@lee-dohm

Member

lee-dohm commented Feb 22, 2017

Things look good to me. @maxbrunsfeld do you want to give a final 👍 ?

@dirk-thomas

Contributor

dirk-thomas commented Mar 2, 2017

Friendly ping 😉

@maxbrunsfeld

Contributor

maxbrunsfeld commented Mar 2, 2017

Hey @dirk-thomas, this looks good, but I've changed some things on find-and-replace master. Are you ok with me pushing to your branch to resolve the conflicts?

@dirk-thomas

Contributor

dirk-thomas commented Mar 2, 2017

@maxbrunsfeld Sure, please go ahead. Let me know if I can help with anything.

@caesarsol

caesarsol commented Mar 2, 2017

I was missing the feature a lot, just found that you were working on this a few hours ago, can't wait to see it merged in the next few days 😉

Keep up the good work!

@maxbrunsfeld

Contributor

maxbrunsfeld commented Mar 3, 2017

@dirk-thomas I have started work on merging the changes from master into this branch. The result is here. I decided to leave it as a separate branch so that you could do with it as you please.

The merge is not easy because we had to remove the use of jQuery from this entire package. In the process, I converted the source files to JavaScript. This is kind of our policy now, as we would like to gradually remove coffee-script from our system. More importantly though, I switched to a virtual-DOM based rendering strategy, which has significantly improved the performance of scrolling through a long list of results.

Status of the merge:

AFAICT, the implementation is merged correctly, but we still need to merge your changes to the (coffee-script) specs into the new (javascript) tests. Could you port over your test changes to the new files?

Sorry for colliding with you here... the removal of jQuery was pressing because it is a prerequisite to this ongoing startup performance work.

Let me know if you have more questions!

@maxbrunsfeld

Contributor

maxbrunsfeld commented Mar 3, 2017

@dirk-thomas Another thing we need to address is potential redundancy between adjacent matches' context lines. For example, if a file contains this text:

1| the grey cat
2| scratched the brown dog.
3| cats are mean.

And I search for cat with 2 lines of context, my search results should look like this:

1| the grey cat
2| scratched the brown dog.
3| cats are mean.

Not this:

1| the grey cat
2| scratched the brown dog.
2| scratched the brown dog.
3| cats are mean.

I'm not sure at what level we should address the redundancy. Maybe it should actually be handled in scandal and text-buffer. Or maybe it would make sense to do it in the UI. Thoughts?
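One possible way to handle it in the UI layer would be to merge the line ranges (match row plus/minus context) of adjacent matches within a file before rendering. A rough sketch; the field names are assumptions and matches are assumed to be sorted by row:

// Sketch only: collapse touching/overlapping context ranges into one block so
// shared context lines are rendered once.
function mergeContextRanges (matches, before, after) {
  const merged = []
  for (const match of matches) {
    const range = {start: Math.max(0, match.row - before), end: match.row + after}
    const last = merged[merged.length - 1]
    if (last && range.start <= last.end + 1) {
      last.end = Math.max(last.end, range.end)   // extend the previous block
    } else {
      merged.push(range)
    }
  }
  return merged
}

// e.g. the example above: matches on rows 0 and 2 with 2 lines of context
mergeContextRanges([{row: 0}, {row: 2}], 2, 2)   // => [{start: 0, end: 4}], a single block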

@dirk-thomas

Contributor

dirk-thomas commented Mar 3, 2017

Regarding the redundancy: please see the example in my original post under "Related Stuff". The problem already exists without this PR when multiple matches are found in the same line, e.g. searching for "the" the same line might appear multiple times:

1| the quick brown fox jumps over the lazy dog
1| the quick brown fox jumps over the lazy dog

Thinking about the following possible UX in the future: first only show the lines containing matches, then "unfold" the context on demand, try to deduplicate where possible. I think this is exclusively a UI problem and should therefore be handled in the view. Neither atom/scandal#41 nor atom/text-buffer#195 should need to know about all of this.

Therefore I would suggest getting this improvement merged as-is (since it is a strict improvement and the shortcomings existed before). I am happy to work on follow-up PRs which improve the "related stuff" (which should be limited to changes in this package).

--

I didn't want to bring up the potential problem related to deduplicating lines, but since you mentioned the topic I want to briefly dump my thoughts here. In the above example with two matches in the same line it would be "straightforward" to only show the line once and, depending on where you click, jump to the position where the match starts.

But if you consider potentially overlapping matches it becomes ambiguous which match you are referring to. E.g. search in the following line for "OneOne":

1| OneOneOneOne

Clearly the search matches three times:

1| OneOneOneOne
1| OneOneOneOne
1| OneOneOneOne

In separate lines it is trivial to jump to the correct match. For a merged line it becomes ambiguous. You could argue that you jump to the "right" line anyway. But in a perfect world you expect that e.g. the match you clicked on will be selected when you jump to it.

I just wanted to sketch why I think the deduplication might be quite complicated for those corner cases.

@dirk-thomas

Contributor

dirk-thomas commented Mar 3, 2017

I tried updating the specs but noticed that the behavior of pageUp / pageDown has changed. With my PR, when scrolling a page down the selected search result was always at the top of the view. Now when you e.g. search for "describe" in this package and trigger pageDown multiple times, the selected result is not at the top anymore. After a few pages a "random" result in the middle of the view is selected. It is pretty difficult to tell where this behavior broke since the diff is basically unreadable with all the other changes. Maybe you have an idea what you changed during the refactoring?

Also the horizontal line separating results is gone now on your new branch. I am not sure if that was intentional or if that is a regression too?

@maxbrunsfeld

Contributor

maxbrunsfeld commented Mar 3, 2017

But if you consider potentially overlapping matches it becomes ambiguous which match you are referring to.

AFAIK, it's built into the search mechanism that we will never produce overlapping matches, so I don't think this will be an issue for us.

the behavior of pageUp / pageDown has changed. With my PR when scrolling a page down the selected search result was always at the top of the view.

The behavior should be that the selected result stays in the same position within the viewport, not necessarily at the top of the view. This is what happens on master, and I thought that I saw this working fine on my branch after the merge. Let me take another 👀 .

Also the horizontal line separating results is gone now on your new branch. I am not sure if that was intentional or if that is a regression too?

Yeah, I'd actually rather not introduce the horizontal line yet. The reason is that I'm ambivalent as to whether it's visually helpful, and it complicates the implementation a little bit, because in order to get good scrolling performance, we now are responsible for calculating heights and positions of search results manually. Sorry for not mentioning that.

@dirk-thomas

Contributor

dirk-thomas commented Mar 3, 2017

But if you consider potentially overlapping matches it becomes ambiguous which match you are referring to.

AFAIK, it's built into the search mechanism that we will never produce overlapping matches, so I don't think this will be an issue for us.

So it might become a problem in the future. Once the search mechanism is "fixed" 😉 But we can think about that when that is the case...

Also the horizontal line separating results is gone now on your new branch. I am not sure if that was intentional or if that is a regression too?

Yeah, I'd actually rather not introduce the horizontal line yet. The reason is that I'm ambivalent as to whether it's visually helpful, and it complicates the implementation a little bit, because in order to get good scrolling performance, we now are responsible for calculating heights and positions of search results manually. Sorry for not mentioning that.

That is totally fine by me.

@dirk-thomas

Contributor

dirk-thomas commented Mar 4, 2017

the behavior of pageUp / pageDown has changed. With my PR when scrolling a page down the selected search result was always at the top of the view.

The behavior should be that the selected result stays in the same position within the viewport, not necessarily at the top of the view. This is what happens on master, and I thought that I saw this working fine on my branch after the merge. Let me take another 👀 .

The current behavior on your branch scrolls by the clientHeight. That makes sense imo since the user expects to see exactly "all new content" without "missing any lines" from what they saw before.

But regarding the newly selected match: let's assume that the search results all have the same height, so the same number of context lines, which means no matches close to the top or bottom of a file. And further assume that the client height is not close to a multiple of each result's height but something in between, e.g. 3.5 result heights. Then after scrolling a page down the newly selected result must sit either ~half a result height further up or further down than the previously selected item. Therefore the selection currently drifts slowly when triggering pageDown repeatedly. It seems that if the fractional part of the client height / result height is < 0.5 it drifts in one direction, and if it is > 0.5 it drifts in the other direction.

I am not sure what to do about it and what a "better" / expected behavior would be. I just think the current behavior sometimes looks "random" to the user, and the specs could only check that the new selection is within a certain range of the previous one, which is kind of fuzzy...
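To make the drift concrete, a tiny standalone simulation under the assumptions above (uniform result height, a viewport of 3.5 result heights, and a "pick the result with the closest viewport offset" rule; purely illustrative, not the package's actual code):

// Illustration only: simulates repeated core:page-down.
const h = 20                 // assumed result height in px
const viewport = 3.5 * h     // assumed viewport height in px
let scrollTop = 0
let selectedTop = 0          // document-space top of the selected result

for (let page = 1; page <= 4; page++) {
  const oldOffset = selectedTop - scrollTop   // offset within the old viewport
  scrollTop += viewport                       // scroll by exactly one viewport
  // pick the result whose top lands closest to the old offset in the new viewport;
  // Math.round breaks the 0.5 tie consistently, which is what causes the drift
  const index = Math.round((scrollTop + oldOffset) / h)
  selectedTop = index * h
  console.log(`page ${page}: selection sits ${selectedTop - scrollTop}px from the top`)
}
// prints 10px, 20px, 30px, 40px: the selection creeps down by h/2 per page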

@maxbrunsfeld

Contributor

maxbrunsfeld commented Mar 6, 2017

I am not sure what to do about it and what a "better" / expected behavior would be. I just think the current behavior sometimes looks "random" to the user and the specs could only check of the new selection is within a certain range from the previous one which is kind of fuzzy...

Yeah, I agree; it's not ideal. But the page-down experience was never exactly great; until my most recent PR, adding results to the DOM would sometimes cause noticeable lag and inconsistent scrolling. And the issue you're describing here exists on master.

I guess an ideal solution would be to store off some kind of "goal offset" for the selected match, so that we could continue to round in the correct direction when paging up and down. That said, I don't think we need to do that as part of this PR.

What would you like to do?
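A minimal sketch of that "goal offset" idea (an assumption of how it could work, not code from this package; selectedTop(), resultTops() and selectResultAtTop() are hypothetical helpers):

// Sketch only: remember once where the selection should sit in the viewport,
// then after every page command pick the result closest to that goal position.
class Pager {
  constructor (view) {
    this.view = view                                        // hypothetical results view
    this.goalOffset = view.selectedTop() - view.scrollTop   // remembered viewport offset
  }

  pageDown () {
    this.view.scrollTop += this.view.clientHeight
    const target = this.view.scrollTop + this.goalOffset
    const tops = this.view.resultTops()                     // document-space tops of all results
    const closest = tops.reduce((a, b) =>
      Math.abs(b - target) < Math.abs(a - target) ? b : a)
    this.view.selectResultAtTop(closest)                    // rounding can no longer accumulate
  }
}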

@dirk-thomas

Contributor

dirk-thomas commented Mar 6, 2017

The challenge is that the current behavior is not that easy to test in a spec. I could try to update the specs to check for some kind of valid "range" around where the new selection is supposed to be (probably the old location plus/minus the largest possible result height). If you think this is "good enough" for now, we can tackle the scrolling behavior separately, given that it is already the case on master.

@maxbrunsfeld

Contributor

maxbrunsfeld commented Mar 6, 2017

I think that when testing page-up and page-down, we can just set searchContextLineCountBefore and searchContextLineCountAfter to zero, so that the tests work more or less the same as they do on master. Would that solve the issue?

@dirk-thomas

Contributor

dirk-thomas commented Mar 6, 2017

Probably, but then we wouldn't really test that the behavior still does something reasonable with context 😉

@dirk-thomas

Contributor

dirk-thomas commented Mar 7, 2017

I updated the pageUp / pageDown specs significantly. They now check that after executing the page* command (sketched at the end of this comment):

  • the newly selected result is not the same as before
  • the vertical offset (position within the visible area) of the newly selected result is close to the previous one
  • the vertical position of the newly selected result is greater (pageDown) / smaller (pageUp)

With this state all specs pass for me locally:

277 tests, 854 assertions, 0 failures, 0 skipped

I assume the CI won't pass since the other PRs haven't been merged and released yet?
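A rough sketch of the three checks listed above (not the actual spec code; selectedElement(), viewportOffset(), documentTop() and maxResultHeight are hypothetical helpers/values):

// Sketch of the page-down assertions; the page-up case mirrors it with
// toBeLessThan for the document position.
it('keeps the selection near the same viewport offset on core:page-down', () => {
  const before = selectedElement()                 // hypothetical helper
  const beforeOffset = viewportOffset(before)      // hypothetical helper

  atom.commands.dispatch(resultsView.element, 'core:page-down')

  const after = selectedElement()
  expect(after).not.toBe(before)                                    // 1. a different result
  expect(Math.abs(viewportOffset(after) - beforeOffset))
    .toBeLessThan(maxResultHeight)                                  // 2. similar offset in the viewport
  expect(documentTop(after)).toBeGreaterThan(documentTop(before))   // 3. further down in the results
})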

@dirk-thomas

Contributor

dirk-thomas commented Mar 8, 2017

I updated the patch in 47dd521 to use the same option names as in atom/text-buffer#195 (comment).

@dirk-thomas

Contributor

dirk-thomas commented Apr 6, 2017

@maxbrunsfeld Is this still being considered for Atom 1.17 beta?

@lee-dohm

Member

lee-dohm commented Apr 6, 2017

@lee-dohm

Member

lee-dohm commented Apr 10, 2017

@dirk-thomas Thanks for all your hard work and patience on this. I talked to the team about this PR in the team meeting this morning. What we would need for this to be merged and go out with v1.17:

  1. Resolve the conflicts that now exist
  2. CI to pass - I understand a suggestion was made to make the tests conditionally run based on Atom version

I understand that CI will pass soon enough as the features that this PR depends on make it out to Stable on their own. So we fully intend to integrate this PR, it just won't be with v1.17 without additional work.

Let me know if you have any questions!

@dirk-thomas

Contributor

dirk-thomas commented Apr 10, 2017

@lee-dohm What is the timeline for 1.17? When does this need to be ready in order to be merged in time?

Regarding the compatibility with stable (old API) as well as the upcoming beta (new API): as far as I can see, the patch would require conditionals in many, many lines / places. Is that really the intention here? I would argue that it makes the code much harder to read / maintain. What is the appropriate condition for checking the Atom version that I should use all over the place?

@lee-dohm

Member

lee-dohm commented Apr 10, 2017

The idea is not to place version checks in the code but in the tests. The condition to check would be atom.getVersion() >= '1.17'. We're planning on cutting v1.17-beta very, very soon.

@maxbrunsfeld

Contributor

maxbrunsfeld commented Apr 10, 2017

@dirk-thomas What I had imagined was something like:

if (parseFloat(atom.getVersion()) >= 1.17) {
  describe('when context lines are enabled', () => {

    // tests for the context line behavior

  })
}

I realize that we'd also need some conditional like this when we call editor.scan, because you made a change to the parameters that method takes. Are there other places where conditionals would be required?
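For example, a rough sketch of how that conditional scan call could look. Only the config keys are taken from this PR; the option names are an assumption based on the referenced text-buffer change, and regex, iterator and editor are assumed to be in scope:

// Sketch only: pass the context options on 1.17+, fall back to the old
// two-argument signature on older Atom versions.
const contextOptions = {
  leadingContextLineCount: atom.config.get('find-and-replace.searchContextLineCountBefore'),
  trailingContextLineCount: atom.config.get('find-and-replace.searchContextLineCountAfter')
}

if (parseFloat(atom.getVersion()) >= 1.17) {
  editor.scan(regex, contextOptions, iterator)
} else {
  editor.scan(regex, iterator)
}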

@dirk-thomas

Contributor

dirk-thomas commented Apr 10, 2017

@lee-dohm I don't know what exactly "very, very soon" means, but probably only a few days? I will try, but I can't promise to find the necessary spare time to work on updating this PR today (tomorrow?). If anybody else would like to rebase the PR and add the conditions, please feel free to do so. It would be really sad if this great feature were left out for yet another cycle...

@dirk-thomas

Contributor

dirk-thomas commented Apr 11, 2017

@lee-dohm @maxbrunsfeld I rebased the patch and used the Atom version to distinguish the signature of the TextEditor.scan function in https://github.com/atom/find-and-replace/pull/847/files#diff-eb24277759b68118e9e07ef629d193e8R250

@lee-dohm lee-dohm merged commit 7ffeecc into atom:master Apr 12, 2017

2 checks passed

continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed
@lee-dohm

Member

lee-dohm commented Apr 12, 2017

Thanks for all your hard work and perseverance on this @dirk-thomas!

@dirk-thomas

Contributor

dirk-thomas commented Apr 12, 2017

🚢 Awesome. Looking forward to the upcoming beta!

@cvogt

cvogt commented Apr 13, 2017

@dirk-thomas thank you so much, you rock!!

@dirk-thomas dirk-thomas deleted the dirk-thomas:match_with_context branch May 2, 2017
