
serialize does not seem to work #78

Closed
pcaneill opened this issue Mar 31, 2015 · 10 comments · Fixed by #83
@pcaneill

I am trying to use the plugin with pylint checkers and I have some problems.

I am trying to schedule a checker job each time I save my file.
Since I am working on huge Python files, it takes pylint about 10 seconds to do the job (100% CPU from pylint while it runs).

If, within 1 second, I do:

:w
:w
:w
:w

It will launch and run 4 jobs (4 pylint processes running => 4 * 100% CPU).
Since I set neomake_serialize to 1, I expected to have only one job running.

Here is my vimrc:

let g:neomake_python_enable_makers = ['pylint']
let g:ycm_show_diagnostics_ui = 1
let g:neomake_open_list = 1

let g:neomake_serialize_abort_on_error = 1
let g:neomake_serialize = 1

autocmd BufWritePost *.py :Neomake pylint

Moreover, I was wondering if it was possible to cancel a job when a new job starts.
My use case would be:

:w   => start pylint (job1)
make some changes
:w   => cancel job1 and start pylint (job2)

Maybe I should create an issue for that?

@Neki
Contributor

Neki commented Mar 31, 2015

From my understanding, the goal of neomake_serialize is not to cancel previous unfinished jobs.

If you have two makers, e.g. let g:neomake_python_enable_makers = ['pylint', 'frosted'], then setting neomake_serialize to 1 will run pylint once frosted has returned, instead of starting both checkers at once.

Previous jobs are indeed not cancelled when :Neomake is called again. We could add a separate option to do this.
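
For illustration, a minimal config sketch of the behaviour described above, reusing the option names that already appear in this thread (whether abort_on_error stops the whole serialized chain is an assumption based on its name):

" With serialize set, a single :Neomake call runs these makers one after
" another instead of spawning them all at once.
let g:neomake_python_enable_makers = ['pylint', 'frosted']
let g:neomake_serialize = 1
" Assumption: aborts the remaining makers in the chain if one reports errors.
let g:neomake_serialize_abort_on_error = 1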

@pcaneill
Author

@Neki Okay, but in my first use case I save my file 4 times and it launches 4 pylint processes in parallel.

Is it normal with neomake_serialize set to 1?

@Neki
Contributor

Neki commented Mar 31, 2015

Yes, because saving your file four times triggers four separate :Neomake commands. Each command spawns a job, and :Neomake does not cancel previous jobs even if neomake_serialize is set to 1. The goal of this option is not to cancel jobs.

In fact, if you have only one checker for your file (in this case, pylint), neomake_serialize has no effect. What I think you need is another option that tells neomake to cancel unfinished jobs when you call :Neomake.

@Neki
Contributor

Neki commented Mar 31, 2015

If you have two configured makers (pylint and frosted, for instance), then this option prevents :Neomake from spawning two jobs (pylint + frosted) when you call :Neomake. It does not affect the interaction between different :Neomake calls.

@pcaneill
Author

Ok, I understand now.

I thought that neomake-serialize serialized different :Neomake jobs, but it actually serializes the different makers within a single job.

How about:

  • neomake-serialize-makers (current behavior)
  • neomake-serialize (no more than one job at a time)

And a new option that would do something like this:

  • if we run a maker job on a file and there already is a maker running for that file, kill the first one and start the second job (roughly sketched below)
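
For illustration only, here is a rough per-buffer sketch of that kill-and-restart idea using plain Neovim job-control functions (jobstart()/jobstop()), not Neomake's actual internals; it only starts and stops pylint and does not collect any output:

" Sketch, not Neomake code: remember the job started for the last save and
" stop it before launching a new one.  In recent Neovim, jobstop() on a job
" that has already exited simply returns 0, so no extra bookkeeping is needed.
let s:pylint_job = -1

function! s:RunPylint() abort
  if s:pylint_job > 0
    call jobstop(s:pylint_job)   " cancel the previous, possibly still-running, check
  endif
  let s:pylint_job = jobstart(['pylint', expand('%:p')])
endfunction

autocmd BufWritePost *.py call s:RunPylint()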

@benekastah
Collaborator

Actually there is a bug here, but not with the serialize feature, as @Neki correctly pointed out. When Neomake runs a maker, it should cancel any outstanding jobs created by that maker. The intent is that at most one job per maker is active at any given time (and that the job is always the most up-to-date one). In the code (see earlier link), it looks like we are canceling the old job shortly after creating the new one, which should perhaps be corrected. Beyond that, I'm not sure what the issue is at the moment.

@pcaneill
Author

My issue is that if I save my file 4 times, I have 4 pylint processes running for several seconds, and I just want the last one to be running.

When you say shortly after, does that mean seconds?

@Neki
Contributor

Neki commented Mar 31, 2015

After reading the code again (sorry, I missed the part where jobs should be cancelled), it means "after a few lines of code", so it is more like milliseconds.

I would investigate, but I am bitten by neovim/neovim#2309 too, so neomake does not work at all for me right now.

@benekastah
Collaborator

Let me know how this fix works for you. The new job kills the old job, and then waits until the old job exits and is cleaned up before running. It seems to be working fine for me and should guarantee that you never have two jobs from the same maker going at once.
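
In plain Neovim job-control terms, the kill-then-wait idea looks roughly like the sketch below; this is only an illustration under that assumption, not the actual code from the fix referenced above:

" Sketch only: stop the old job, wait briefly until it has exited and been
" cleaned up, and only then start the replacement job.
function! s:RestartChecker(old_job, cmd) abort
  if a:old_job > 0
    call jobstop(a:old_job)          " ask the old job to terminate
    call jobwait([a:old_job], 1000)  " wait up to 1s for it to exit
  endif
  return jobstart(a:cmd)             " start the new job afterwards
endfunction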

@pcaneill
Author

The fix works for me.

Sorry for the late response, I was on vacation.

Thanks!
