
Automatic tests with guard #16

Open
andreareginato opened this issue Oct 3, 2012 · 24 comments

@andreareginato
Collaborator

Write your thoughts about the "automatic tests with guard" best practice.

@josephlord

watchr is a similar alternative that seemed a bit more flexible (or at least easier to use in a more flexible way).

@myronmarston

I dislike the binary good/bad split here. Guard is a fine tool, and I have used it from time to time, but ultimately, I've found my TDD workflow works best with a vim keybinding that makes it easy to run just the examples I want when I want to...and then I do use a rake task to run the entire suite before pushing code.

It would be a lot more useful if your site spent time discussing why you consider one practice to be good and another to be bad. We don't want people doing stuff just because they've read on your site that they should; people should understand the tradeoffs, use their brains, and make appropriate choices for their projects. Here, I think the pertinent point is that rake spec, at least in a rails project, takes much longer before you get feedback from the first spec, because it loads the test DB schema and loads the rails environment multiple times (once when loading your rake tasks, and again when it shells out to run your specs and your spec_helper loads the rails environment).

@Spaceghost

I do not use guard in my day-to-day work, though I've worked with it and with its predecessor watchr. I even put a significant amount of effort into a book it was used in.

I disagree that it's a 'best practice'; it's really just a popular, helpful tool. To suggest that people use it as a best practice seems whimsical and bad form.

I prefer running tests when I want them, and when I do, I use vim and a myriad of scripts built up over time to do so: vim-config.

I think that this shouldn't be labeled as a best practice, because it really isn't.

@marten

marten commented Oct 6, 2012

The biggest problem I have with guard is that it runs all specs whenever I go from red to green. Running all my specs takes about a minute, which is a pretty long time if you're in your TDD zone. The problem is that while it's doing that, it's not running any changed specs, so if you start refactoring after you've gone green, you won't get feedback until the entire suite is done. Only at that point does it start running specific specs again, and you'd better not break anything while refactoring or the whole thing starts all over again.

@andreareginato
Collaborator Author

@josephlord I've tried Watchr before Guard and finally switched to the latter.

@myronmarston That keybinding is something I've been looking for for a long time, but for me (and generally for rails devs) it would work only with solutions like Spork or Zeus. Do you feel like writing an article that I could link from the specific guideline? As I'm a VIM user and have felt the need for a one-shot test run myself, I would love a clear description of the steps to make it real. Let's discuss it further. For example, it's fine that it runs the selected test, but if I'm in the model, do I have to move back to the test file and save it to run the test? I'm really interested in this topic.

@Spaceghost The myriad of scripts is just great, but I'm not sure it would make my workflow faster. Let's discuss it.

@marten As you can see in this gist, you can easily change that setting; in more detail, add `all_after_pass: false`.
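
For reference, a minimal Guardfile along those lines might look like the sketch below. The watch patterns are illustrative and assume guard-rspec with a standard Rails layout:

```ruby
# Guardfile -- a minimal sketch, assuming guard-rspec and a standard Rails layout.
# all_after_pass: false stops Guard from re-running the entire suite
# after a failing spec goes back to green.
guard :rspec, all_after_pass: false do
  watch(%r{^spec/.+_spec\.rb$})                            # changed spec: re-run it
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" } # app file: run its spec
  watch('spec/spec_helper.rb') { 'spec' }                  # helper change: run everything
end
```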

@Spaceghost

I feel like spork is a smell for running tests in parallel. If all you want is to have a rails environment hot all the time, then you should really only be running integration and functional tests in it. Unit tests should not need the whole environment. That's just poor craftsmanship.

@andreareginato, it's two more keystrokes in your workflow that run that test, and if you want to run the directory, it's two more. I used and liked guard for a while, even used the notification stuff. But I feel that now I'm more interested in being intentional about how I run tests instead of running them automatically. I feel like it gives me time to cognitively assess my tests and code instead of just running them continuously.

I automate builds entirely, with different pipelines for each suite of tests. Unit, integration, and functional.

My argument isn't against guard; I like it and consider using it sometimes. I don't like spork all that much, but it can be useful if you use it properly. What I'm saying is that these are two nifty tools, but not best practices.

@andreareginato
Collaborator Author

I agree @Spaceghost. That's why they are now "only" guidelines. I'll start thinking about your interesting ideas.

@Spaceghost

Thanks, @andreareginato. I really like this project, by the way.

@andreareginato
Collaborator Author

Good to hear that. I want to make it a container full of good tips.
@Spaceghost, @myronmarston I would love your help on adding the vim keybinding if you like the idea.

@myronmarston

> I feel like spork is a smell for running tests in parallel. If all you want is to have a rails environment hot all the time, then you should really only be running integration and functional tests in it. Unit tests should not need the whole environment. That's just poor craftsmanship.

> But I feel that now I'm more interested in being intentional about how I run tests instead of running them automatically. I feel like it gives me time to cognitively assess my tests and code instead of just running them continuously.

Well said :). This is pretty much how I feel, too.

> @Spaceghost, @myronmarston I would love your help on adding the vim keybinding if you like the idea.

There's a vim plugin that provides something nearly identical to what I use. You can link to that.
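
For readers who want to roll their own instead, a mapping in that spirit could be as small as the sketch below. The `<leader>t` bindings and the `bundle exec rspec` invocation are illustrative choices, not the plugin's actual defaults:

```vim
" A minimal sketch: run the spec file in the current buffer, on demand.
nnoremap <leader>t :!bundle exec rspec %<CR>
" Run only the example under the cursor, using rspec's file:line syntax.
nnoremap <leader>T :execute "!bundle exec rspec " . expand("%") . ":" . line(".")<CR>
```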

One other tip that might be pertinent here:

http://myronmars.to/n/dev-blog/2012/03/faster-test-boot-times-with-bundler-standalone

(I don't know that I'd consider this a "best practice", though--it all depends on your environment. In a typical rails testing environment that loads the entire rails env, the added time of bundler at runtime is probably not noticeable).
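
Roughly, the approach described in that post looks like the sketch below, assuming bundler's default standalone output path:

```ruby
# One-time setup on the shell:
#   bundle install --standalone --binstubs
#
# Then, at the top of spec/spec_helper.rb, replace `require "bundler/setup"`
# with a plain require of the generated file, so bundler itself never has to
# load at runtime:
require_relative "../bundle/bundler/setup"
```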

@Spaceghost

Hey @skalnik, you're github famous!

@rkh

rkh commented Feb 15, 2013

I prefer rerun over guard or watchr, as it's zero configuration.
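
For context, rerun watches the filesystem and re-runs an arbitrary command whenever files change, so a minimal invocation might look like this (the spec path is illustrative):

```sh
gem install rerun
rerun -- bundle exec rspec spec
```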

@andreareginato
Collaborator Author

I've added the alternative solution that gives you direct control over the specific test, and I've removed the "wrong" case. Any corrections and comments on the updated guideline are appreciated.

@marnen

marnen commented Jan 2, 2014

@Spaceghost:

> But I feel that now I'm more interested in being intentional about how I run tests instead of running them automatically. I feel like it gives me time to cognitively assess my tests and code instead of just running them continuously.

And this is exactly the wrong thing to be doing, I think. If you pick and choose which tests you run, then you'll only run the ones you expect to fail, and so if you break something you don't expect to break, you won't immediately know. OTOH, if you just mechanically run all your tests continuously (or the way Guard does, proximate tests first and then all), then you will find out immediately if you broke something in another part of the codebase.

Put another way, if you try to be too smart about which tests you run, you'll introduce confirmation bias. You need to run the whole suite pretty frequently. Does it take time? Sure, but you can minimize that time by writing your tests properly. Anyway, it's essential. As I keep saying: fast tests are of no use if they're inaccurate.

@Spaceghost

While I see your point, I find that not focusing on what I'm trying to get the code to express and just looking at the whole of everything broken introduces a kind of bias of its own. I'm focusing on telling a story in what I'm working on, and the breakage of tests that aren't within my focus is something that should be handled after I finish telling the story elsewhere. The information I gain from those other broken tests is minimal, and I run my whole test suite quite often once I've pulled my head out of that small area.

I think that a short-term confirmation bias is valuable as well as detrimental. It makes it so that I don't change my design based on how many tests break, which affects what I'm willing to do. I get to code bravely with intent to improve rather than coding defensively. I'm all up for agreeing to disagree, but I just don't find value in automating test runs when you could just be running your focused tests first, then the tests near them, and then the full suite.

@marnen

marnen commented Jan 2, 2014

@Spaceghost:

Interesting points!

> While I see your point, I find that not focusing on what I'm trying to get the code to express and just looking at the whole of everything broken introduces a kind of bias of its own. I'm focusing on telling a story in what I'm working on, and the breakage of tests that aren't within my focus is something that should be handled after I finish telling the story elsewhere.

I get your point. I find, though, that I normally like to see everything fail, even if I don't fix it immediately: this gives me an idea of what I will have to do to properly finish the new feature.

> The information I gain from those other broken tests is minimal,

Only if you're not thinking enough about your application as a whole, IMHO.

> I think that a short-term confirmation bias is valuable as well as detrimental. It makes it so that I don't change my design based on how many tests break, which affects what I'm willing to do.

Why would that be a good thing? If a lot of tests break, your code is telling you something (which may be that your design should be changed), and you should listen to it, not ignore it.

> I get to code bravely with intent to improve rather than coding defensively.

So do I. Running all the tests frequently doesn't change this.

> I'm all up for agreeing to disagree,

I'm not. I think that's lazy: most questions in software development have a single right answer (or a similarly small finite number), and it's up to us as responsible developers to find that answer. If we disagree, that probably means at least one of us is wrong, and we should figure out which of us it is.

> but I just don't find value in automating test runs when you could just be running your focused tests first, then the tests near them, and then the full suite.

That's what Guard does. I find value in the automation because I don't want to have to remember to run the tests every time. With Guard, the tests run every time I save a file, so I save myself a task every time I save.

@myronmarston

> I'm not. I think that's lazy: most questions in software development have a single right answer (or a similarly small finite number), and it's up to us as responsible developers to find that answer. If we disagree, that probably means at least one of us is wrong, and we should figure out which of us it is.

My experience with software engineering is the opposite: the longer I've done it, the more I realize that there isn't a single right answer to most things, just tradeoffs. There are tradeoffs involved with using a tool like guard. It isn't a universal best practice -- not because it's not a good tool (it's a great tool!), but because best practices are nearly always contextual: taking a particular route provides some benefits but also bears a cost. Whether or not the benefits outweigh the costs depends on the situation.

I don't think we should try to arrive at a list of binary good/bad guidelines. Instead, we should be aiming to do a better job of describing the tradeoffs so people can make informed decisions on their own, and understand why a majority of the community recommends a particular practice.

@Spaceghost

I agree with @marnen about having single right answers, but mind that this project is about best practices, and I just can't agree that this is anything more than great tooling. It's hard to draw a line between things I believe in and things that are best practice. This is why I agree to disagree about your tooling preferences, but insist that this is not a best practice. I'll follow up with specific points soon, but I'm in the middle of ughlifecrap.

@marnen

marnen commented Jan 2, 2014

@Spaceghost: I think single right answers apply to most software development issues, not just best practices. I also think automated test execution is a best practice (although of course any sufficiently powerful tool would work, not specifically Guard), and I'm not sure why you're insisting that it isn't.

@myronmarston:

> My experience with software engineering is the opposite: the longer I've done it, the more I realize that there isn't a single right answer to most things, just tradeoffs.

I agree with you about many answers being complex. But I believe that once the use case has been identified—including context and tradeoffs, if any—a single right answer should emerge. I think you're postulating a single right answer over a broader domain than I meant, so this kind of becomes a straw man. (I'm not saying you meant it that way, just that I probably didn't make clear what I was driving at.)

I do think that people sometimes invoke the spectre of imaginary tradeoffs to justify bad decisions. (I probably do this as much as anyone else!)

> I don't think we should try to arrive at a list of binary good/bad guidelines.

And yet that's precisely what this project is attempting to do.

> Instead, we should be aiming to do a better job of describing the tradeoffs so people can make informed decisions on their own, and understand why a majority of the community recommends a particular practice.

No argument there. But I think those can usually be made into binary guidelines if the context is stated specifically enough. (Whether such specific context remains useful is a separate question. :) )

Anyway, we're drifting off topic here, for which I apologize. I'll admit that relativism in software development is one of my pet peeves. (I'm a composer as well, and I have no problem with relativism in art, but aesthetics is not fully subject to logic.)

@myronmarston

> > I don't think we should try to arrive at a list of binary good/bad guidelines.

> And yet that's precisely what this project is attempting to do.

Which is why I don't ever recommend this resource, and will continue not to until it is focused more on articulating tradeoffs and the "why" of particular guidelines.

> I think you're postulating a single right answer over a broader domain than I meant, so this kind of becomes a straw man.

What I said applies to a broader domain, but it also applies here. I've used guard in the past, but have found that I'm much more productive with a simple vim keybinding to run my specs. It gives me instant feedback on demand. I much prefer that to distracting feedback whenever I happen to save a file.

@marnen

marnen commented Jan 2, 2014

@myronmarston:

> Which is why I don't ever recommend this resource, and will continue not to until it is focused more on articulating tradeoffs and the "why" of particular guidelines.

Yeah, that's a good point. I do think some of the guidelines are too simplistic. OTOH, I like having stupid-simple guidelines that I can share with colleagues who just aren't getting it. They can worry about the subtleties later. :)

> I've used guard in the past, but have found that I'm much more productive with a simple vim keybinding to run my specs.

My keybinding for Guard is Cmd-S. It happens to save my current file too. :)

> It gives me instant feedback on demand. I much prefer that to distracting feedback whenever I happen to save a file.

Interesting. I don't find Guard notifications distracting. I'm not sure I could find them distracting: they're a constant, necessary indicator of project health. If I could feasibly have Guard run every time I hit Return in the editor, I probably would.

@myronmarston

> My keybinding for Guard is Cmd-S. It happens to save my current file too. :)

There are many times I save code files when I don't want any specs run. I save my files constantly (even with code temporarily, intentionally broken), and I don't want the specs to run just because I saved the file.

> I don't find Guard notifications distracting. I'm not sure I could find them distracting: they're a constant, necessary indicator of project health. If I could feasibly have Guard run every time I hit Return in the editor, I probably would.

I think the fact that I find the feedback distracting has more to do with how my brain works and the kind of workflow I use than guard as a tool.

@marnen

marnen commented Jan 3, 2014

@myronmarston:

> I save my files constantly (even with code temporarily, intentionally broken), and I don't want the specs to run just because I saved the file.

Hmm. I save my files constantly too. I could imagine that if I had a keybinding to say "save and don't run tests", I might use it—but only occasionally. I'm willing to put up with an occasional excess failing test notification for the sake of the security blanket that Guard represents for me. YMMV; I suspect this is one of those legitimate matters of taste. :)

@Spaceghost

I just think you have a tool you really like, that I used to like, and that works well for you. I think we're all having trouble separating best practices for everyone from our own personal preferences. I think this particular issue is fraught with preferential bias in either direction. Does anyone want to offer up any kind of scientific analysis?

Guard is a really cool tool, but I think the real best practice is running tests often and running the right tests often. Your tooling is just a preference on how to arrive at that.

@andreareginato Can we reword this to be more focused on the intent and not the implementation? Something about constant feedback by running tests.
