
[RFC] New parallel compilation infrastructure #1405

Merged: 64 commits, Mar 23, 2017

Conversation

@gebner (Member) commented Feb 28, 2017

TL;DR This is why you want it:

[screenshot: 2017-02-28-152124_1920x1080_scrot]

I have been using this branch almost exclusively for the last few months, and it
should be pretty stable now.

There is one nasty memory leak though (that is also in master): open
init/core.lean and standard.lean. Then change something in init/core.lean.
Every change uses an additional few hundred megabytes for me. I would really
appreciate any ideas on how to debug this. So far I've tried valgrind (too slow
on the whole library, small examples don't show any leaks) and heaptrack
(doesn't give any clues either), without success.

New parallel compilation infrastructure

While the current parallel compilation and server mode work pretty well, there
are a few issues:

  • Defining tasks in C++ is cumbersome.
  • Message reporting, cancellation, and task queues are terribly tightly coupled.
    The code for mt_task_queue is horribly complex at the moment because it has
    too many responsibilities, and there are probably lots of bugs.
  • The approach requires a hand-rolled task queue, and does not easily transfer
    to other programming languages, or even just multiple Lean instances inside
    the same process (which might reasonably happen when Lean is used as a
    library).
  • We cannot work on a file without compiling its dependencies. The
    dependencies (in particular their proof elaborations) are always executed.
  • Similarly, we eagerly compile all reverse dependencies. This is a huge
    problem if you edit init/core.lean and have data/list/comb.lean open at
    the same time.
  • When handling the info command we indicate that the server is busy (with the
    "hourglass" icon in Emacs), just because the command spawns a task.

This PR solves all of these issues. I think a good way to view the problem is
to consider 4 very similar, but subtly different graphs:

  1. Task dependency graph. (Contains an edge if a task needs to be run after
    another one. For example, type-checking must run after elaboration.)
  2. Module dependency graph. (Reflects the import statements.)
  3. Message tree. (How messages can be reused. When reparsing a file, we
    keep the messages from the old proof elaborations around, and only replace
    them once they are re-elaborated as well.)
  4. Cancellation graph. (Which tasks need to be cancelled when we reload a
    file?)

These are now separate and mostly explicitly represented. Currently tasks carry
metadata that we use for 3 and 4; this PR replaces that metadata with explicit
data structures.

Additionally, this PR adds the following new features:

  • By default, we only check the visible lines. You can also choose to check
    nothing, or the complete visible files.
  • Greatly reduced number of sorry warnings.
  • The command-line exit status is 1 if and only if an error was reported. This
    matches the error reporting in Emacs. Concretely, we missed two errors in the
    run tests due to the current mismatch: quote1.lean and exact_perf.lean.
    @leodemoura quote1 was an easy fix, but exact_perf.lean reports an error
    because try { exact h2 } fails, and h2 has an elaboration error. You
    added this error message at
    add5266
    Any ideas?

I have removed all incremental message/task updates from the server. The
server resends all running tasks / all messages every time (about every 100ms).
(See #1364 for how we arrived at this approach.)
However these are only sent for the region that is checked: for example if
you choose "check visible lines", then you only see errors for the visible lines.

There are two backwards-incompatible changes in the server protocol:

  • The additional_message command has been removed.
  • There is now a roi (region of interest) command:
{
  "seq_num": 45,
  "command": "roi",
  "files": [
    {
      "file_name": "/home/gebner/lean/library/init/core.lean",
      "begin_line": 1,
      "end_line": 69
    }
  ]
}

Task dependency graph (and tasks in general)

As before, the task dependency graph is specified by the dependencies of each
task. Every task has a method that computes a list of its dependencies:

virtual void get_dependencies(buffer<gtask> &) {}

The reason this is a method is that the dependencies can change dynamically.
Consider, for example, a dependency on a task<task<expr>>: here the
get_dependencies method would first return only the outer task, and then both
the outer and the inner task once the outer one has finished (see the sketch
below).
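
A minimal sketch of such a dynamically growing dependency list, reusing the gtask, task<T>, buffer, and gtask_imp names from this PR; the has_result()/get() accessors on the outer task are hypothetical stand-ins for whatever the real API provides:

class nested_task_dep : public gtask_imp {
    task<task<expr>> m_outer;
public:
    explicit nested_task_dep(task<task<expr>> const & outer) : m_outer(outer) {}

    void get_dependencies(buffer<gtask> & deps) override {
        // We always depend on the outer task.
        deps.push_back(m_outer);
        // Once the outer task has finished, we additionally depend on the
        // inner task it produced; before that, the inner task is unknown.
        if (m_outer.has_result())
            deps.push_back(m_outer.get());
    }
};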

Since tasks no longer require so much metadata, it is feasible to construct
them with lambdas, including nice combinators for mapping, etc.:

add_library_task(map<unit>(error_already_reported(), [_d] (bool already_reported) {
    if (!already_reported && has_sorry(_d)) {
        report_message(...);
    }
    return unit();
}).depends_on(_d.is_theorem() ? _d.get_value_task() : nullptr));

(The snippet above causes sorry to be reported at most once per declaration,
and only if no other error was reported. It is also completely deterministic.)

Tasks now have the type task<T> (much easier to type). Tasks are
typically constructed using the task_builder<T> helper. A common operation is to
wrap the execution function, e.g. to set thread-local variables. This is
accomplished using the .wrap() function; implementation-wise, each wrapper
adds a new heap-allocated gtask_imp object that delegates the execution and
dependency methods to the previous gtask_imp (see the sketch below). (We could
avoid the heap allocation with some template trickery, but I'd rather wait for
C++14 for that--it has the equivalent of Rust's impl Trait feature.)
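
To make the delegation concrete, here is a rough sketch of what a single .wrap() layer might look like internally; the execute() signature, the logger type, and scoped_set_logger are assumptions for illustration, only gtask_imp and buffer<gtask> come from the PR:

class wrapped_imp : public gtask_imp {
    std::unique_ptr<gtask_imp> m_base; // the previous gtask_imp
    logger * m_logger;                 // hypothetical thread-local payload
public:
    wrapped_imp(std::unique_ptr<gtask_imp> base, logger * l) :
        m_base(std::move(base)), m_logger(l) {}

    void get_dependencies(buffer<gtask> & deps) override {
        m_base->get_dependencies(deps); // delegate dependency computation
    }

    void execute(void * result) override {
        scoped_set_logger scope(m_logger); // set the thread-local variable
        m_base->execute(result);           // delegate the actual execution
    }
};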

Dependencies are usually constructed with combinators as well, and are
themselves tasks (with the generic gtask type).

In general, tasks are now no longer submitted immediately. They are stored in
the log_tree for lazy evaluation--see the corresponding section for details.

Module dependency graph

This part remains essentially unchanged. In the module_mgr every module has
a list of dependencies. We might want to use the cancellation graph to
invalidate modules as well; this would have the (positive?) side-effect that
invalidated modules are immediately cancelled.

I'm holding off on this refactoring until we have the parser monad.

Message tree (log_tree)

The log_tree is a mutable, concurrent data structure. It stores produced
messages, the info managers, as well as the tasks that need to be executed to
produce these messages. More concretely, a node of the log_tree has the
following fields:

name_map<node> m_children;

std::vector<log_entry> m_entries;  // e.g. messages

location m_location;
std::string m_description;

gtask m_producer;

Every node has an associated location (file name + position range). The tree is
structured so that a child's location is always a subset of its parent's
location. We could enforce this invariant, as well as the requirement that
messages be contained in their node's location. This would solve the problem
where we run a command and report error messages in a different part of the file.

The tree is designed for lazy evaluation: we first insert nodes that contain
unsubmitted tasks. The tasks are then submitted by a separate listener, and only
when the editor/command-line requests the error messages for that location (see
the sketch below). When a task is finished, it resets the producer field in its
node to null. We only modify nodes where an ancestor has a non-null producer.
Nodes that have a non-null producer are shown as in-progress in the editor.
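
As a sketch of what that listener does, under the assumption of hypothetical get_producer()/for_each_child() accessors and a submit() entry point (log_tree, gtask, and roi.intersects come from this PR):

void submit_visible(log_tree::node const & n, region_of_interest const & roi) {
    // Subtrees outside the requested region are skipped entirely;
    // their producer tasks stay unsubmitted until they become visible.
    if (!roi.intersects(n)) return;
    if (gtask producer = n.get_producer())
        submit(producer); // hand the task to the task queue
    n.for_each_child([&] (log_tree::node const & child) {
        submit_visible(child, roi);
    });
}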

When sending information to the editor, we simply traverse this tree. (Recall
that we only send messages for the currently visible region, so this is pretty
efficient. We don't even look at subtrees whose location is disjoint.)

std::vector<message> msgs;
m_lt->for_each([&] (log_tree::node const & n) {
    if (roi.intersects(n)) {
        for (auto & e : n.get_entries()) {
            if (auto msg = dynamic_cast<message const *>(e.get())) {
                if (roi.intersects(*msg))
                    msgs.push_back(*msg);
            }
        }
        return true;
    } else {
        return false;
    }
});

The progress messages at the command line and in the server mode are obtained by
traversing the log_tree as well, instead of looking at the state of the task
queue.

Tasks are typically inserted using the add_library_task<T> function.

Cancellation graph

At the moment, we cancel tasks based on their position in the file, by storing
the file name and position in every task, and sending a predicate to the task
queue when we want to cancel something. This PR takes an approach inspired by
vscode, where every command gets an explicit CancellationToken object to which
you can attach listeners. We have a thread-local variable holding a
cancellation_token, which can be cancelled and to which we can add children.

The ownership convention is worth describing: parents hold weak references to
their children, and children hold strong references to their parents. This
ensures that, as long as you hold a cancellation_token, you will receive
cancellation events from every parent, without creating memory leaks (by never
freeing children). A sketch of this convention follows.
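
A minimal, self-contained sketch of this ownership convention (the class below is illustrative, not the actual cancellation_token in this PR):

#include <atomic>
#include <memory>
#include <mutex>
#include <vector>

class cancellation_token {
    // Strong reference: keeps every ancestor alive while we exist, so we
    // never miss a cancellation event from a parent.
    std::shared_ptr<cancellation_token> m_parent;
    // Weak references: a child that is no longer referenced elsewhere can be
    // freed, so parents do not leak their (potentially many) children.
    std::vector<std::weak_ptr<cancellation_token>> m_children;
    std::atomic<bool> m_cancelled{false};
    std::mutex m_mutex;
public:
    explicit cancellation_token(std::shared_ptr<cancellation_token> parent = nullptr) :
        m_parent(std::move(parent)) {}

    static std::shared_ptr<cancellation_token> mk_child(
            std::shared_ptr<cancellation_token> const & parent) {
        auto child = std::make_shared<cancellation_token>(parent);
        std::lock_guard<std::mutex> lock(parent->m_mutex);
        parent->m_children.push_back(child);
        return child;
    }

    void cancel() {
        m_cancelled = true;
        std::lock_guard<std::mutex> lock(m_mutex);
        for (auto & wc : m_children)
            if (auto c = wc.lock()) // the child may already be gone
                c->cancel();
    }

    bool is_cancelled() const { return m_cancelled; }
};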

To do

  • Different priorities for the tasks. On master we prioritize parsing over
    elaboration; this priority has been lost in the refactoring. It does not
    seem to be a major issue for now.

  • Do not elaborate proofs in dependent modules. With this PR, the elaboration
    would still happen for a "stupid" reason: we check every import for sorry,
    which forces its proofs to be elaborated. We just need to find a clean way
    to skip this sorry check, probably only when "checking visible lines".

  • Once we refactor the parser to process one command at a time, I'd like to
    schedule one parsing task per command. This brings a few immediate advantages:

    • Better visual feedback about the parsing status.
    • We can stop parsing right below the edge of the screen. Parsing is
      automatically resumed when scrolling down.
    • We can reuse snapshots even when we have not finished parsing the file.
      Right now we only get the snapshots after parsing the whole file.

Performance

There is no (significant) performance change in either direction when compiling
the standard library. The editor interface seems to be more responsive now.

@Kha (Member) commented Feb 28, 2017

Every change uses an additional few hundred megabytes for me. I would really
appreciate any ideas on how to debug this.

Still reading through the RFC, but https://clang.llvm.org/docs/LeakSanitizer.html might be worth a try.

@leodemoura (Member)

There is one nasty memory leak though (that is also in master): open
init/core.lean and standard.lean. Then change something in init/core.lean.
Every change uses an additional few hundred megabytes for me.

I tried a few experiments.

  • Disabled memory pools and the small object allocators (they could be fragmenting memory, and/or preventing memory from being reclaimed). No change: memory consumption keeps increasing.

  • Printed the amount of memory released by memory pools and small object allocators (at the end of execution). The amount of reclaimed memory increases as memory consumption increases. My point here is that I don't think the issue is due to a memory leak. I think a module may be preventing memory from being reclaimed because it contains references to many live objects. Today, I will add a compilation flag that, when enabled, will display the total amount of memory reclaimed by all these subsystems.

  • I disabled g_expr_cache_enabled and other caches. No change: memory consumption keeps increasing.

  • The memory keeps increasing even if I instruct emacs to use -j0 -T0 -M0.

  • I tried valgrind, but it detected only minor leaks (the ones I posted before).

@leodemoura (Member)

@leodemoura quote1 was an easy fix, but exact_perf.lean reports an error
because try { exact h2 } fails, and h2 has an elaboration error. You
added this error message at
add5266
Any ideas?

I will take a look.

@gebner (Member, Author) commented Feb 28, 2017

I tried a few experiments.

I tried adding this in module_mgr on file invalidation:

m_modules.clear();

When I check in the debugger, the task queue is empty, there is just a single loaded module (init.core), and the log_tree is also cleared, yet we're still using a gigabyte of memory. I really can't see where we keep references to all that memory.

@leodemoura (Member)

When I check in the debugger, the task queue is empty, there is just a single loaded module (init.core), and the log_tree is also cleared, yet we're still using a gigabyte of memory. I really can't see where we keep references to all that memory.

It could be the thread-local caches.
We may consume a lot of memory by storing references to a few different environment objects.

@leodemoura (Member) commented Feb 28, 2017

I have added the following compilation flags

-D TRACK_CUSTOM_ALLOCATORS=ON

Tracks the amount of memory deallocated by memory_pool and small_object_allocator.

-D TRACK_LIVE_EXPRS=ON

Enables a counter for the number of live expr objects.

BTW, Lean displays the amount of deallocated memory and live expressions before/after finalization.

-D CUSTOM_ALLOCATORS=OFF

Disables memory_pool and small_object_allocator.

Here is some data for an example that consumes 1Gb by making simple modifications to tactic.lean while vector.lean is open:

a) with -D TRACK_CUSTOM_ALLOCATORS=ON and -D TRACK_LIVE_EXPRS=ON

memory deallocated by memory_pool and small_object_allocator (before finalization): 117205272
number of live expressions (before finalization): 105
memory deallocated by memory_pool and small_object_allocator (after finalization): 608798768
number of live expressions (after finalization): 0

So, the custom allocators had released 608Mb at the end of the execution. Note that this amount does not include the overhead introduced by malloc. I believe this is evidence that the high memory consumption is not due to a memory leak. 608Mb is a lot of memory. Moreover, when I edited the file manually, the memory consumption was around 250Mb after the first modification.

b) With -D CUSTOM_ALLOCATORS=OFF, memory consumption goes down to 828Mb. However, this is expected since we may have unused space in pools for different threads. 828Mb is still a lot of memory.

@gebner (Member, Author) commented Feb 28, 2017

The compilation flags are nice. I just had to move them around and make them write to stderr, so that I can see them from emacs. I also added an extra output right before thread finalization:

memory deallocated by memory_pool and small_object_allocator (before finalization): 96532824
number of live expressions (before finalization): 4309228
memory deallocated by memory_pool and small_object_allocator (after finalization): 96532824
number of live expressions (after finalization): 4309140
memory deallocated by memory_pool and small_object_allocator (after thread finalization): 173477928
number of live expressions (after thread finalization): 4309140

(This was taken while the server was still running.) So it really is the thread-local data that uses up the memory. However, all worker threads are stopped as soon as they become idle. The lthread objects are destroyed as well (there was a reference cycle there), but that doesn't help either.

@leodemoura (Member)

I have just added -D SAVE_INFO=OFF and -D SAVE_SNAPSHOT=OFF.
Then, I tried the following experiment with the setting

-D SAVE_INFO=OFF -D SAVE_SNAPSHOT=OFF -D CUSTOM_ALLOCATORS=OFF

and lean server running with -j2.
Then, I tried the usual two-file example (tactic.lean and bitvec.lean), and kept adding/fixing an error in tactic.lean.
Initially, memory consumption stabilized at 1.05Gb. Then I started adding different kinds of errors, and it started to increase again. However, it stabilized many times. I managed to get it to 1.8Gb, but it took hundreds of modifications. With the info_manager and snapshots enabled, I get to 2Gb very quickly.

@gebner (Member, Author) commented Feb 28, 2017

Hmm, apparently all the thread-local data is on the main thread: if I comment out the following line, then delete_thread_finalizer_manager does not change the value of get_memory_deallocated:

    ~thread_finalizers_manager() {
//        finalize_thread(get_pair()); // finalize main thread
        pthread_key_delete(g_key);
    }

That is, I get:

memory deallocated by memory_pool and small_object_allocator (before finalization): 104291536
number of live expressions (before finalization): 125
memory deallocated by memory_pool and small_object_allocator (after finalization): 104291536
number of live expressions (after finalization): 0
memory deallocated by memory_pool and small_object_allocator (after thread finalization): 104291536
number of live expressions (after thread finalization): 0

@Kha LeakSan is awesomely fast, thanks for pointing me to it. However it doesn't detect the leak above.

@leodemoura (Member)

@gebner I tried to reproduce your last experiment. Here is the data I get (I'm using the master branch).
Before:

% memusg ../../bin/lean -j 2 --server=../../tmp/trace2.txt > /dev/null
memory deallocated by memory_pool and small_object_allocator (before finalization): 375631624
number of live expressions (before finalization): 105
memory deallocated by memory_pool and small_object_allocator (after finalization): 375631624
number of live expressions (after finalization): 105
memory deallocated by memory_pool and small_object_allocator (after thread finalization): 636353912
number of live expressions (after thread finalization): 0
memusg: peak=974244

After I commented out the line you referenced in your message:

memusg ../../bin/lean -j 2 --server=../../tmp/trace2.txt > /dev/null
memory deallocated by memory_pool and small_object_allocator (before finalization): 325759920
number of live expressions (before finalization): 105
memory deallocated by memory_pool and small_object_allocator (after finalization): 325759920
number of live expressions (after finalization): 105
memory deallocated by memory_pool and small_object_allocator (after thread finalization): 325759920
number of live expressions (after thread finalization): 105
memusg: peak=889312

@leodemoura (Member)

@gebner Here is a conjecture:

  1. Worker threads allocate many exprs and other kinds of objects using the memory_pool.
  2. The memory pool for a worker thread is often empty, so it requests more memory using malloc.
  3. Some/most of these objects are deallocated by the main thread, and end up stored in the main thread's memory_pool.
  4. The main thread's memory_pool keeps growing since the main thread doesn't do much. This is why we had 300Mb in the memory_pool of the main thread in the last experiment.

What do you think?

I will test this conjecture by putting a limit on the memory_pool size.
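
For illustration, here is a minimal sketch of a bounded per-thread free list (this is not Lean's actual memory_pool code; the names and the 1024 cap are illustrative). The point is that once the free list is full, deallocations go straight back to the system allocator, so a mostly idle thread cannot accumulate unbounded memory:

#include <cstdlib>
#include <vector>

class bounded_memory_pool {
    std::size_t         m_obj_size;
    std::size_t         m_capacity; // e.g. 1024 objects
    std::vector<void *> m_free_list;
public:
    bounded_memory_pool(std::size_t obj_size, std::size_t capacity = 1024) :
        m_obj_size(obj_size), m_capacity(capacity) {}

    ~bounded_memory_pool() {
        for (void * p : m_free_list) std::free(p);
    }

    void * allocate() {
        if (!m_free_list.empty()) {
            void * p = m_free_list.back();
            m_free_list.pop_back();
            return p;
        }
        return std::malloc(m_obj_size);
    }

    void deallocate(void * p) {
        if (m_free_list.size() < m_capacity)
            m_free_list.push_back(p); // recycle within this thread
        else
            std::free(p);             // pool is full: return to the OS allocator
    }
};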

@leodemoura (Member)

@gebner Here is the same experiment, but memory_pools can have at most 1024 objects.

memusg ../../bin/lean -j 2 --server=../../tmp/trace2.txt > /dev/null
memory deallocated by memory_pool and small_object_allocator (before finalization): 10004080
number of live expressions (before finalization): 105
memory deallocated by memory_pool and small_object_allocator (after finalization): 10004080
number of live expressions (after finalization): 105
memory deallocated by memory_pool and small_object_allocator (after thread finalization): 13335936
number of live expressions (after thread finalization): 0
memusg: peak=1048732

We are still consuming a lot of memory (1Gb), but the main thread is holding only 3Mb instead of 300Mb during finalization.
When using the editor, the memory seems to stabilize around 1.5Gb. It keeps growing, but at a much slower pace, and I hit several plateaus where the memory doesn't increase.

Another thing to keep in mind is that there is overhead and fragmentation going on in the malloc provided by the runtime. I will keep running more experiments.

leodemoura added a commit to leodemoura/lean that referenced this pull request Feb 28, 2017
See leanprover#1405

Memory consumption is still high, but I didn't manage to cross the 2Gb
limit anymore with this commit even after hundreds of modifications.

@gebner I'm not seeing a big difference between Lean without memory_pool,
with a bounded memory_pool, and with an unbounded memory_pool. We may even
consider removing it in the future after more careful benchmarking.

In the benchmark (https://gist.github.com/leodemoura/b27fb4203a13a67274b388a602149303),
I'm getting the following numbers:

- No memory_pool: runtimes between 3.532s - 3.556s

- With memory_pool bounded by 8192: runtimes between 3.32s - 3.44s

- With memory_pool (with no limit): runtimes between 3.32s - 3.44s

On the other hand, the small object allocator makes a big difference.
I used your list_rev.lean example.

- with:    2.62s
- without: 3.75s
@gebner (Member, Author) commented Mar 1, 2017

3. Some/most of these objects are deallocated by the main thread, and end up stored in the main thread's memory_pool.

This is very true. We deallocate all the environments on the main thread, as well as the info managers, etc.

I tried the memory_pool change, but it doesn't change the memory consumption for me. I found a different memory leak though that sometimes happens at the end of a server process, I'm looking into it right now.

@leodemoura (Member)

I tried the memory_pool change, but it doesn't change the memory consumption for me.

Yes, the memory consumption is high, but it seems to stabilize for me.
Does it still keep growing for you? For me, it stops around 2Gb.
Moreover, it doesn't grow as fast as before. Before the memory_pool commit, every modification would consume between 100Mb and 200Mb.

The commit also seems to address the following observation you made:

Hmm, apparently all the thread-local data is on the main thread: if I comment out the following line, then delete_thread_finalizer_manager does not change the value of get_memory_deallocated:

Do you still observe this behavior after the memory_pool commit?
I cannot observe it in the master branch.

I found a different memory leak though that sometimes happens at the end of a server process, I'm looking into it right now.

Does this leak affect the master branch too?
Valgrind is not reporting any significant memory leak in the master branch when I tried it yesterday.
There were minor leaks (a few kilobytes) in the server code, but they do not explain the crazy memory consumption.

@gebner (Member, Author) commented Mar 1, 2017

Does it still keep growing for you? For me, it stops around 2Gb.

It still keeps growing for me. I need to test master again to see the difference.

Do you still observe this behavior after the memory_pool commit?

No, the main thread finalizer does not change the deallocation number anymore, so that seems to be fixed.

@gebner (Member, Author) commented Mar 1, 2017

Does it still keep growing for you? For me, it stops around 2Gb.

Okay, there were quite a few leaks all over the place. Now it stops at 1.5G for me as well.

@gebner (Member, Author) commented Mar 3, 2017

I've added the region-of-interest support to the vscode extension as well: leanprover/vscode-lean@master...gebner:parallel2

Unfortunately, it's not possible to determine which lines are visible with the current vscode API: microsoft/vscode#14756

@leodemoura (Member)

@leodemoura quote1 was an easy fix, but exact_perf.lean reports an error
because try { exact h2 } fails, and h2 has an elaboration error. You
added this error message at
add5266
Any ideas?
I will take a look.

I investigated this one. However, I haven't found a good solution yet (only hacks).
Here is the problem:
When we parse tactic notation, we statically know whether we should report errors or not.
Example:

begin
     tac_1, tac_2
           --^ if tac_2 fails, we want the red squiggly line here
end

On the other hand,

begin
     tac_1, try {tac_2}
                --^ we don't want a red squiggly line here if tac_2 fails
end

So, we use the tactic.rstep and tactic.istep combinators to control this.
tactic.rstep line col tac will report the error at the given line and column if tac fails.
The tactic notation module decides which one should be used.
This works for everything but to_expr.
Elaboration errors are reported by the elaborator depending on the report_errors flag.
Right now, tactic.interactive.exact is defined as

/- doesn't allow metavars and reports errors -/
meta def i_to_expr_strict (q : pexpr) : tactic expr :=
to_expr q ff tt -- << the `tt` means report errors

meta def exact (q : parse texpr) : tactic unit :=
do tgt : expr ← target,
   i_to_expr_strict ``(%%q : %%tgt) >>= tactic.exact

@leodemoura (Member)

Here is one ugly hack to fix the problem above.
We add a combinator tactic.ignore_elab_errors tac which executes tac without reporting elaboration errors, even if tac uses to_expr e ff tt -- << the tt here means "report_errors".

@gebner (Member, Author) commented Mar 3, 2017

I think that in the long run all error messages during tactic execution should be kept in the tactic_state. Then <|> behaves as expected and suppresses error messages when backtracking. Even if we add ignore_elab_errors for try, we'd still have the problem with <|>.

If I recall correctly, the main motivation for rstep ignoring the exception position, and then for to_expr directly reporting an error, is the issue in quote_error_pos.lean: if we call to_expr on a pre-expression from another definition, then we might get elaboration errors at that definition, and not at the tactic use site.
This issue was the reason we now keep track of position ranges (begin and end of the current command) in the log_tree data structure; we could take the position if it lies inside the range (when the pre-expression comes from the current tactic block), or assign it another useful position if it lies outside (when it comes from another definition).

For now, the ignore_elab_errors hack is probably the easiest and would fix both the test and the try problem. I'm on vacation for a week; I can solve this afterwards in a nicer way.

@leodemoura (Member)

I think that in the long run all error messages during tactic execution should be kept in the tactic_state.

This would be bad. The error messages are thunks because we want to discard them at <|>. This is important for performance: we originally did not use thunks, and it was a performance bottleneck.
It is also problematic to store the thunks, since we would have to use the thread-safe vm_obj.

Then <|> behaves as expected and suppresses error messages when backtracking. Even if we add ignore_elab_errors for try, we'd still have the problem with <|>.

This is a bug in the current support for <|> in interactive mode. I can fix this, and it will work with ignore_elab_errors.

@gebner (Member, Author) commented Mar 6, 2017

This would be bad. The error messages are thunks, [...]

I did not suggest switching from thunks to eagerly constructed error messages. I was referring to the fact that some operations like add_declaration and to_expr directly report errors that remain after backtracking, and that those error messages should go into the tactic state instead (as thunks).

@leodemoura (Member)

And that those error messages should go into the tactic state instead (as thunks).

This is problematic since, to put them into the tactic_state, we would need to convert the thunks into thread-safe vm_objs (which deep-copy vm_obj closures and constructors).

@Kha (Member) commented Mar 8, 2017

I'm not sure I'm following yet - why would we ever need to report errors immediately or store them in the tactic state, instead of returning them in result.exception?

@gebner (Member, Author) commented Mar 13, 2017

I'm not sure I'm following yet - why would we ever need to report errors immediately or store them in the tactic state, instead of returning them in result.exception?

In very rare cases we produce warnings instead of errors, for example when adding declarations with the wrong noncomputability annotation. The elaborator can also produce more than one error (when error recovery is enabled). Maybe we want to produce more warnings in the future.

@gebner (Member, Author) commented Mar 23, 2017

@leodemoura How do we proceed with this branch? BTW, I added your examples as a test case and everything works as expected.

@semorrison (Contributor)

@gebner, I just tried out this branch with my code. I can get the Lean server to crash consistently by opening VS Code with just monoidal_categories/monoidal_category.lean open. It parses for a few seconds and then crashes. It's fine on the master branch.

git clone https://github.com/semorrison/lean-category-theory
cd lean-category-theory
git reset --hard 7cf08593e2b1749105d68fc9b88c32a6c6c403a4

@gebner (Member, Author) commented Mar 23, 2017

@semorrison Thanks for testing! This branch requires an updated version of the vscode extension from https://github.com/gebner/vscode-lean/tree/parallel2
But I just noticed that the updated extension no longer works with the current state of the branch, I'll fix it and notify you when it should work again.

@leodemoura (Member)

@gebner I can merge it today. Is it ready?

@gebner (Member, Author) commented Mar 23, 2017

@leodemoura Great. I'm still tracking down the bug reported by Scott. Aside from that the branch is ready.

@gebner (Member, Author) commented Mar 23, 2017

@leodemoura The bug is fixed, feel free to merge.
@semorrison vscode now works again.

@leodemoura merged commit 0d4f829 into leanprover:master on Mar 23, 2017
@gebner (Member, Author) commented Mar 27, 2017

Just so that I don't lose track of it: I think this is the upstream vscode issue that would allow us to implement the "visible lines" mode. microsoft/vscode#588

Edit: here is yet another issue for it. microsoft/vscode#17362
