TDD study session (2012 May 03)

sandal edited this page May 11, 2012 · 39 revisions

On 2012 May 3 Gregory Brown and Eric Hodel ran a study session on test-driven development. This page includes a summary of the findings from that meeting, as well as the full meeting transcript.

Meeting Summary

The discussion centered around the challenges involved in testing Blind, a positional audio game built by Gregory for a Practicing Ruby article. While working on that game, Gregory stumbled over a bunch of TDD-related problems, and so this study session was set up to explore those challenges.


A good deal of time was spent discussing how to test code that depends on randomization. Blind relies on randomization to place game elements (mines, exits, etc.) at arbitrary positions within the game world, and this presented some challenges in testing. One problem was that, due to floating point rounding errors, the tests which verified that a player was within detonation range of a mine, or that a mine was in the correct region of the world, would intermittently fail. Eric (and others) suggested a few ways to deal with this problem:

  1. For tests that only implicitly relied on the randomization of the game elements, setting a fixed seed for the randomizer would at least guarantee consistent results across test runs. In Ruby 1.9, it is possible to create instances of Random, so it would not be necessary to set the seed globally; instead, an instance of Random could be injected directly.

  2. If testing the validity of the randomization is necessary, then manually choosing a fixed point and then verifying that the arbitrary point generated by Point.random wasn't equal to that point would be sufficient. While it is mathematically possible for this kind of test to fail, the chances are astronomically small with arbitrary floating point values.

  3. To deal with the root problem uncovered by the tests, some sort of fuzz factor would need to be introduced. For example, some value epsilon could be established and then checks such as the following could be used:

  (distance - epsilon .. distance + epsilon).include?(detonation_distance)

We also discussed whether the intermittent test failures and far reaching effects of rounding errors might be a sign of a design problem (both with the code and tests), but didn't come to any clear conclusions.

Callback testing

Blind uses a simple callback based system for handling game events, and to test this system, Gregory wanted to verify that the right callbacks were being called at the right time. The following example shows how he approached the problem:

  it "must trigger an event when deep space is reached" do
    dead = false

    game.on_event(:enter_region, :deep_space) { dead = true }

    refute dead, "should not be dead before the outer rim is reached"


    assert dead, "should be dead once the outer rim is reached"

We thought about this problem for a while, but both Eric and the others who were participating in the meeting said that this is probably how they would have written their tests as well. We talked about the possibility of introducing some testing framework level support for this kind of problem, and managed to dig up an old RSpec issue in the process. That issue was never resolved, and the solutions discussed sounded pretty messy.

We came up with a couple of our own similarly messy ideas, such as the following:

  # this approach is possible but very brittle as it relies on
  # storing yield status in instance variables
  it "must trigger an event when deep space is reached" do
    game.on_event(:enter_region, :deep_space, &must_yield)

    refute_yielded "should not be dead before the outer rim is reached"

    # ... (code that moves the player into deep space was elided here) ...

    assert_yielded "should be dead once the outer rim is reached"
  end

  # this is somewhat cleaner, but still results in slightly hard to read code,
  # so it's unclear whether the cure is worse than the disease.

  it "must trigger an event when deep space is reached" do
    callback = Mock.callback

    game.on_event(:enter_region, :deep_space, &callback)

    refute callback.called?, "should not be dead before the outer rim is reached"


    assert callback.called?, "should be dead once the outer rim is reached"

With the best alternatives we could come up with still not feeling satisfying, we decided to accept this as a wart and move on. But we'd definitely be interested in hearing what others think about this problem!

Acceptance testing through the UI

In Blind, while some attempts were made to isolate the UI code from the underlying game engine, the interface between them was still somewhat brittle. Whenever a major change was made to the game engine, even the small resulting changes to its surface would cause problems in the front-end. These issues were tricky to track down, because they typically occurred only when particular game events were triggered. To deal with this problem, Gregory built a small tool to simulate user input and play through an entire game. We discussed the pros and cons of this approach, as well as the general problem of testing in complex environments. In particular, we came up with the following thoughts:

  1. The difficulty of UI testing mostly depends on what tools are available at the framework level (or from third-party libraries). In modern web development, tools like Capybara abstract away many of the details of the delivery mechanism to make acceptance testing through the UI easy, which makes writing tests around the paths through your application very profitable. However, in an experimental and immature framework like Ray, it is necessary to learn a lot of low-level details of the framework in order to write meaningful UI tests, and this drives the cost up. This isn't a criticism of the framework but rather something to be aware of as an application developer: if adequate testing tools don't exist for your framework of choice, you may need to build and maintain them yourself!

  2. Because testing code through the UI is often hard unless there is really good framework support for it, clean separation between UI code and underlying business logic is that much more important. With some refactoring, many of the problems found in the Blind game could have been tested at a lower level with less effort.

  3. One of the main things that motivates black-box UI testing is that it can be hard to know what information you need in order to debug the various problems that can arise. So by testing from the outermost layer, it is possible to at least detect when a problem occurs, even if you have not yet discovered why the problem has happened. Good debugging output and event logging can really help make acceptance tests (as well as manual testing) more fruitful, and can provide the information that is needed to write more focused tests at a lower level. Without that information, the results of a failed UI test run can be very opaque.

This topic is one that we could have probably discussed further, but Eric's experience with this particular problem was somewhat limited. This could be a good topic for future study, so please let us know if you have useful resources or suggestions!

Magic numbers + brittle tests

The use of hard-coded values (AKA magic numbers) throughout Blind's tests made them somewhat brittle and opaque. Eric reviewed a few different test cases and shared his thoughts on how to improve them.

The first group of magic numbers we discussed were somewhat innocuous ones. These were precomputed values to test simple, well known arguments. For example, the following tests were used to verify that the distance between two points could be computed:

  it "must compute distance between itself and another point" do
    point_a =, 3)
    point_b =, 7)


We discussed how to improve these tests, and ended up settling on the idea that a combination of simple documentation and easier to understand hardcoded values would make them much better:

  it "must compute distance between itself and another point" do
    # use a 3-4-5 triangle for simplicity
    point_a =, 0)
    point_b =, 4)

    # compute two-dimensional Euclidean distance 
    # DETAILS:

Gregory pointed out that the reason he used messier-looking points (even though they also formed a 3-4-5 triangle) was to make it more likely to catch subtle failures in the underlying implementation. But Eric convincingly argued that for simple problems like computing Euclidean distance, it is unlikely that things would work incorrectly, and that it would be better to add additional tests later rather than start with messy tests. Both agreed that if the computation under test were more complicated, using messy data might be beneficial, but that in this case it was overkill and not especially helpful.

Some other uses of hard-coded values in Blind were not so straightforward. When the procedures under test are not so universally understood, arbitrary hard-coded values look far uglier. Gregory was especially unsatisfied with his tests for the Blind::World object. You should check out the full test file to see what a mess it is, but the code below should give you a rough sense of the story at a glance:

  it "must be able to look up regions by position" do

  it "must have an exit position in the minefield" do
    exit_position = world.positions.first(:exit)

    distance = world.center_point.distance(exit_position)

  it "must be able to determine the current position" do

  it "must locate points in the safe zone" do
    [[0,0], [19,0], [0,19.999]].each do |point|
      assert_region :safe_zone, point

To clean these tests up, Gregory suggested the possibility of creating named fixture data (i.e. things like fixtures[:safe_zone_points]) but talked himself out of it because it seemed to make the tests too abstract. Eric suggested once more that the use of hard-coded values was not necessarily the problem; rather, the fact that these tests seemed to be operating at many layers at once made it harder to keep track of what the various fixed points actually meant. He suggested breaking up the tests so that each was more isolated in its area of focus. For example, this set of world tests could be broken into two parts: one focused on the structure of the world, and the other on the semantics of the world.

The structural tests would cover straightforward details, such as the fact that the world is broken up into sequential non-overlapping regions. The semantic tests would verify more game-specific details, including what particular regions should be in the world and what game elements should be found within those regions. While this change was not simple enough to produce a code example for during the meeting, it is possible to imagine how it would affect the organization of the tests, and possibly make them easier to understand and work with even if they still used hard-coded values. At the very least, it would ensure that the structural tests for the world would not fail whenever a change to the game-specific details was made.

The basic realization that we settled on is that at the core of any project, some tests are bound to be a bit brittle because decoupling them from the implementation entirely would cause the tests themselves to no longer be meaningful. The key to dealing with that problem is not necessarily to eliminate the use of hard-coded values, but to try to isolate different concerns as much as possible so that the brittle portion of your test suite stays well contained over time.

Speeding up the refactoring feedback loop

We wrapped up the discussion by talking about the challenges involved in making large refactorings while maintaining good test coverage. In particular, we agreed that spending too much time refactoring under a red light is just as bad as, or worse than, refactoring with no test coverage at all. Gregory showed an example of a single refactoring that was a nightmare to pull off because of all the different changes it required. Eric pointed out that there were at least two changes being attempted at once in this code, and that this was part of the problem.

Gregory pointed out that in later refactorings on Blind, he didn't directly attack the change he wanted to make, but instead decided to make small refactorings in the general area around where the major refactoring needed to be done. His experience was that by smoothing out the general area with various small changes before making any major cuts, the total time through the feedback loop went down. Eric agreed that this was the right approach to take, and shared a few bits of advice to help make this kind of workflow easier.

In particular, Eric recommended introducing new APIs before removing the old ones, and then gradually converting calling code to use the new APIs. Even if this leads to some messiness in the short term, it reduces the likelihood of a huge and disorienting failure cascade, which helps with making changes one small step at a time. Eric pointed out that with good test coverage, calls to the old API could finally be removed by replacing them with dummy methods which simply raise an error. This could also be accomplished through project-wide search tools like ack, but the error-raising approach lets the code tell you where the problems are.

In addition to these suggestions, Eric also pointed out a recent pairing experience which reminded him of how he likes to work in small steps, as well as a great talk on TDD by Gregory Moeck which helps explain how to build nicely layered systems that may be easier to work with and refactor.

Meeting Transcript

seacreature: Hi everyone, we'll get started in a just a couple minutes 05:32 PM

seacreature: We also have some big news: We are starting the transition to making Mendicant a fully public community / school: 05:33 PM

seacreature: drbrain: I'll start hitting you with questions in like two minutes, putting in a food order right now so I don't forget to eat dinner :) 05:35 PM

drbrain: seacreature: awesome 05:36 PM

seacreature: For everyone else, what we're about to do is have drbrain review some of my code and questions about test-driven development. 05:36 PM

seacreature: I am currently doing a 90 day self-study in TDD and as part of that I promised to try to fill in my own holes in understanding where possible 05:36 PM

seacreature: the task of working on a simple little game lately proved to be very annoying for me testing wise, and so that's what we'll be talking about today 05:37 PM

seacreature: if you're not already following the story, please skim this document, particularly the game description: 05:37 PM

seacreature: we will try to keep to general testing concepts, but the background won't hurt 05:38 PM

seacreature: These are the questions we'll try to cover today, at least some of them 05:42 PM

seacreature: 05:42 PM

seacreature: Folks who are hear listening in, please say hi just so we know roughly who is around 05:42 PM

revans: hello 05:42 PM

ptn777: howdy 05:42 PM

chastell: hi, everyone :) 05:43 PM

rafadc: hi! 05:44 PM

seacreature: drbrain: do you mind if I ask you these questions out of order? 05:44 PM

drbrain: not at all 05:44 PM

Sou|cutter: I gave a presentation on my xml parsing gem at the chicagoruby group this past Tuesday 05:45 PM

seacreature: Sou|cutter: awesome! 05:46 PM

Sou|cutter: 05:46 PM

seacreature: you'll have to tell me more about it later, we're about to start an event here :) 05:46 PM

seacreature: drbrain: I upated the gist to add question numbers to make it easier to know what we're referring to: 05:46 PM

Sou|cutter: it's nothing big, but my first general public gem 05:46 PM

seacreature: I think that I would like to start with the questions around randomization (question 5 and 6) 05:47 PM

drbrain: ok 05:47 PM

seacreature: I have had this question come up before in training sessions before, and always had kind of a hand-wavey answwer 05:47 PM

seacreature: in that context, we were talking about how to test a shuffle routine for a deck of cards object 05:48 PM

seacreature: in the context of this game, I am using randomization to place mines in a minefield and also to randomly place an exit location 05:48 PM

seacreature: The first question is whether randomization should be considered an implementation detail, or whether it should be specified in the tests in some way 05:49 PM

seacreature: folks, look for question 5 in that gist and you'll see the code I'm referencing 05:49 PM

seacreature: it is basically something like Point.random(d1...d2) which generates a point in a given distance range away from the (0,0) point 05:50 PM

drbrain: when I deal with randomness, I think this is a perfectly valid approach: 05:51 PM

drbrain: since we're testing we're under controlled conditions so randomness is an enemy of easy test writing 05:51 PM

Sou|cutter: Can't you just use the same seed in tests to make the numbers predictable? Not very black-box, though... 05:52 PM

drbrain: there's multiple pieces of this test that we want to verify for correctness 05:52 PM

drbrain: Sou|cutter: yes... 05:53 PM

Sou|cutter: sorry if I'm getting ahead of things. I don't really know if that's the right thing to do either tbh 05:53 PM

drbrain: so by using srand before testing the method we can get the same result out every time to test that the point is at the correct distance 05:53 PM

drbrain: to test the randomness side we could create a random point with an arbitrary seed then assert that it is different from one generated with our fixed seed 05:54 PM

drbrain: that way we test that both the non-random portion of the method works along with the random portion 05:55 PM

seacreature: The idea of using the same seed would work for me, and is probably better than what I did to counter some problems related to that in some areas of my code (I serialized objects after generating them to ensure they stayed the same) 05:55 PM

drbrain: something like: arbitrary = Point.random; srand 5; fixed = Point.random; ... 05:55 PM

seacreature: Sometimes I feel uncomfortable with that because it's not mathematically certain, even if it's nearly impossible to generate the same value with a wide open randomization 05:56 PM

drbrain: also, in ruby 1.9 you can create instances of Random to prevent other users of #rand from messing up your sequence 05:56 PM

seacreature: Maybe I'm just being too obsessive there 05:56 PM

seacreature: ooh, interestin 05:56 PM

rafadc: Do you prefer us to interrupt you to ask questions or do we wait until the end? 05:57 PM

drbrain: I'm fine with answering a few questions intermixed, but I want seacreature to keep us on track 05:57 PM

seacreature: rafadc: we are figuring that out as we go, how about if we allow interjections, but if we find it goes off track too much we'll adopt a format where I talk, then drbrain, then everyone else on each question? 05:57 PM

seacreature: i.e. as long as it's not too chaotic I'm happy to have an organic discussion 05:58 PM

Sou|cutter: hands seacreature a gavel 05:58 PM

rafadc: ok, then aren't we testing the random number generator with that approach? 05:59 PM

seacreature: okay... so if you were to use an instance of random, how would you integrate that 05:59 PM

rafadc: I mean: we could create a random point with an arbitrary seed then assert that it is different from one generated with our fixed seed 05:59 PM

rafadc: aren't we testing that the Random class is working? 05:59 PM

seacreature: I should explain more why this question matters to me 05:59 PM

seacreature: one of the problems I was having was with floating point errors when computing distances between points 06:00 PM

drbrain: rafadc: an important property of this method is that it also returns different points for different calls 06:00 PM

drbrain: rafadc: so I think it's a valid test to include 06:00 PM

seacreature: so I would generate random points within a distance range 06:00 PM

drbrain: so long as we test that arbitrary != fixed I don't think it's a problem 06:01 PM

seacreature: but sometimes they would have rounding errors which caused them to end up not within that range 06:01 PM

seacreature: if I had set a random seed, it would have helped with that because my test results would have been consistent 06:01 PM

seacreature: maybe we could have done something like this 06:02 PM

drbrain: whenever I use floats I also use assert_in_delta (must_be_in_delta for minitest/spec) 06:02 PM

seacreature: def Point.random(range, 06:02 PM

drbrain: seacreature: yeah, dependency injection 06:03 PM

seacreature: drbrain: yeah, I was thinking about doing that but what I actually was having was somewhat of a failure cascade 06:03 PM

seacreature: so for example, I would generate a mine and move the player within distance of exploding it 06:03 PM

seacreature: but the distance function had some error 06:03 PM

seacreature: so occasionally it'd give false positives and negatives 06:03 PM

seacreature: or, before I introduced rejection sampling, when I generated points they would sometimes be outside of the range, even though I used trig functions that would guarantee they were within range if the precision was infinite 06:04 PM

seacreature: so the tests were more of the form of "Did this event get triggered" 06:04 PM

seacreature: not "is 8.000 in delta 0.001 of 7.99997" 06:04 PM

seacreature: I got really fed up with all of this, I almost wanted to go back to rectangular forms and not use floats at all 06:05 PM

seacreature: 06:06 PM

Sou|cutter: so you may never have seen those problems if you had used a seeded random? (depending on what values are generated?) 06:06 PM

seacreature: lines 15-19 feel like they should not need to exist in this code 06:06 PM

seacreature: Sou|cutter: that's correct... the tests seemed to pass 95% of the time 06:06 PM

drbrain: I played around with a space-combat simulator for a while and I didn't come up with a good solution for float imprecision when comparing polar-generated coordinates vs cartesian coordinates 06:07 PM

Sou|cutter: interesting.. 06:07 PM

drbrain: fortunately my simulator was grid based so you couldn't stand in arbitrary positions 06:08 PM

seacreature: One way to possibly solve it would be effectively to define my region borders with a delta built into them 06:08 PM

seacreature: 06:08 PM

seacreature: the issue is mostly that borders are zero-width. 06:10 PM

rubysolo: had to do something similar for point-in-polygon detection for a google maps app. basically built in a fuzz factor for if the point was arbitrarily close to the border. 06:10 PM

drbrain: yes, adding an epsilon for computation much like assert_in_delta uses 06:10 PM

drbrain: (well, assert_in_epsilon) 06:10 PM

seacreature: okay, I think that this is a satisfying answer, if tedious 06:10 PM

seacreature: use a random seed to make your tests deterministic in general 06:11 PM

rafadc: can we just stub ou random number generator? 06:11 PM

chastell: (a non-random seed to make them deterministic…) 06:11 PM

seacreature: bake in some sort of fuzz factor into the border checks 06:11 PM

rafadc: the seed looks like a magic number to me 06:12 PM

seacreature: chastell: yeah sorry :) 06:12 PM

seacreature: rafadc: the benefit of the seed is that it gives you predictable randomization 06:12 PM

seacreature: so you don't have to totally hard code the values 06:12 PM

seacreature: they just are guaranteed to come out in the same order 06:12 PM

chastell: I’d probably toy with dependency injection of the randomiser, but you said you tried that and it went awry 06:13 PM

seacreature: chastell: I didn't try that, actually 06:13 PM

drbrain: I think stubbing #rand is equally magic to srand 5, you'll have the same results of a fixed test order 06:13 PM

drbrain: rather, fixed set of random numbers 06:13 PM

rafadc: yes, but it's the same concept as hardcoding the values, excepting that you don't say explicitly which values the randomiser is going to return 06:14 PM

rafadc: which i find a bit more difficult to read 06:14 PM

drbrain: since ruby gives us srand I prefer it since it requires less implementation 06:14 PM

seacreature: I did not know about Ruby 1.9 allowing the creation of the instances 06:14 PM

seacreature: of Random 06:14 PM

rafadc: if I go to a test that was not written by me 06:14 PM

seacreature: drbrain: I'm fearful of applying srand globally because it may affect dependencies 06:14 PM

drbrain: rafadc: if you have to depend on particular values from rand I think your tests are too coupled 06:15 PM

drbrain: you shouldn't have the equivalent of assert_equal 6, rand(10) 06:16 PM

drbrain: when testing randomness you should test that different things happen, not that exactly "this" happens 06:17 PM

chastell: seacreature: well, any given srand should be as good as any other, so the dependencies shouldn’t be affected, but I can relate to the bad feeling of ‘toying with reality’ 06:17 PM

rafadc: uoch, touché 06:17 PM

seacreature: 06:17 PM

seacreature: I think I'm satisfied with this 06:17 PM

Sou|cutter: there's no way to re-set the seed to a value that would restore the global random instance to its previous state either, I would think... 06:18 PM

chastell: angle should also call randomizer.rand 06:18 PM

chastell: (I think) 06:18 PM

seacreature: it isn't too weird for a random() method to accept a randomizer 06:18 PM

seacreature: yeah there are bugs, it's untested, fixing :) 06:18 PM

seacreature: so let's recap what we came up with here and then move on, we could give a whole discussion on randomization, but it's mostly an edge case :) 06:19 PM

seacreature: 1) seed when the randomization is an implementation detail 06:19 PM

seacreature: 2) if you have computations that need to be within certain bounds, put a fuzz factor on those bounds if generating floating point values 06:20 PM

seacreature: 3) to test the randomization itself, verify that it is possible to generate a value that is not equal to a fixed point 06:20 PM

seacreature: I think #3 might feel a little bit too much like testing the randomizer 06:21 PM

seacreature: but seems to be good advice IF that matters 06:21 PM

seacreature: drbrain, et. al: does that summarize what we've come up with so far? 06:21 PM

drbrain: seacreature: yes 06:21 PM

seacreature: There is also the real question for this particular context of whether or not allowing arbitrary positioning is a good design decision 06:22 PM

drbrain: for 3, the test should be as simple as "do I get different results", and only when that matters 06:22 PM

seacreature: life would be MUCH easier if the game objects were snapped to an integer-based grid 06:22 PM

seacreature: okay... so maybe we can move on now :- 06:23 PM

seacreature: :-) 06:23 PM

chastell: seacreature: would it make sense to just make the grid tiny enough (like, a ‘move’ is 1000 units)? 06:23 PM

chastell: sorry, yeah, let’s move on :) 06:23 PM

seacreature: interesting idea. 06:24 PM

seacreature: maybe. 06:24 PM

seacreature: alright drbrain, I think we should move away from the numbers for a moment and go to question 1 06:24 PM

seacreature: my game uses a simple publish/subscribe callback system for events 06:24 PM

seacreature: 06:25 PM

seacreature: but I kind of feel the way I test them are a bit ham-fisted 06:25 PM

seacreature: (first code example in the gist) 06:25 PM

seacreature: thoughts? 06:25 PM

drbrain: To be honest, I haven't thought of a better way of testing callbacks other than using closures 06:26 PM

seacreature: In the past I have done slightly weird things 06:26 PM

drbrain: you're right, it's pretty clumsy to test, but I haven't yet found a less-annoying way 06:26 PM

seacreature: like try to implement something with to_proc 06:26 PM

seacreature: but that makes it feel even more artificial 06:26 PM

drbrain: yeah 06:27 PM

seacreature: it is almost like I wish that testing frameworks would provide a dummy mock for this 06:27 PM

seacreature: foo { assert_called } 06:27 PM

chastell: would creating an actual subscriber and make it expect things to be called make this more readable (at a cost of much more work)? 06:27 PM

drbrain: well, it would need to be more like foo(&Mock.assert_called) 06:28 PM

seacreature: drbrain: right, because otherwise Ruby may never actually see it :) 06:29 PM

drbrain: but I dislike ^^ more than a closure 06:29 PM

seacreature: foo { omg_fake_stuff } will happily fail silently 06:29 PM

seacreature: chastell: I think maybe so, but I worry about that sort of thing 06:29 PM

seacreature: I kind of dislike building elaborate artificial scenarios in my tests 06:30 PM

drbrain: ditto, the closure is easy to reason about even if it feels verbose 06:30 PM

seacreature: but this may be six of one and half a dozen of the other: they have pros and cons that more or less wash each other out 06:31 PM

seacreature: does anyone else have thoughts on how to test callbacks? 06:31 PM

drbrain: if you use the callbacks frequently it would be more worth it 06:31 PM

Sou|cutter: not really an answer, but I think this is an interesting discussion about rspec and testing yielding behavior 06:32 PM

drbrain: but usually I just test the callback once 06:32 PM

Sou|cutter: opened 2 years ago.. still no answer ;) 06:33 PM

drbrain: since callbacks are delayed in time I'm not sure if that syntax would help 06:33 PM

drbrain: rather, matcher 06:33 PM

Sou|cutter: yeah, true 06:33 PM

Sou|cutter: I actually think the way this is tested in the gist is ok 06:34 PM

seacreature: So... our conclusion on this one is if dave hasn't solved it yet, we probably are doing the best we can here? :-P 06:35 PM

drbrain: the bottom of this comment has roughly the same conclusion: 06:35 PM

seacreature: okay, so let's table this one for now 06:36 PM

seacreature: I plan to write up our conclusions from this meeting and openly invite folks to correct us where we're wrong, so if we're lucky someone might catch up with us later and share their thoughts 06:37 PM

seacreature: Next question! 06:37 PM

seacreature: I didn't realize when asking drbrain to do this review that he's done some work w. OpenGL 06:38 PM

seacreature: that makes it much easier to ask questions about this stuff! 06:38 PM

seacreature: What I found with my game is that there is a lot of non-trivial but somewhat inconsequential stuff going on in the UI 06:38 PM

seacreature: due to some design flaws, there is also a fair bit of semi-game-specific logic up there too, but we can ignore it for the moment 06:39 PM

seacreature: the problem I found was that when I made changes to the underlying structure, I had a hard time doing so without introducing bugs at the frontend layer 06:39 PM

seacreature: typically these were just method renames or signature changes 06:39 PM

seacreature: but they were only exposable by playing the game and reaching certain conditions in the game 06:39 PM

seacreature: eventually I got fed up and wrote some very brittle but functional UI testing tools 06:40 PM

seacreature: we can get to those, but I'm curious whether you had similar issues in the game oyu were working on drbrain 06:40 PM

seacreature: and if so, how did you work through them? 06:41 PM

drbrain: since my game was two AI players fighting each other I didn't have much UI 06:42 PM

drbrain: you launched the game and the ships moved until one blew up 06:42 PM

seacreature: i see, but say you introduced a new game element or something like that 06:43 PM

seacreature: how would you make sure that you didn't break something in the process? 06:43 PM

drbrain: since it was an exploratory project, though, I wrote tests for nearly everything 06:43 PM

drbrain: one huge problem I had was in the initial design of the game engine 06:44 PM

seacreature: hah, you have the opposite habit of mine :) 06:44 PM

seacreature: for me, s/nearly everything/almost nothing/ 06:44 PM

drbrain: when you ran the game the ships would spin around funny and move backwards, then appear at a different location 06:44 PM

drbrain: yes! since it was new to me there were too many details to keep track of so I wrote reminders as tests 06:45 PM

drbrain: but I would also cut out parts of the game and spike them in isolation, then backport the fix into a test 06:46 PM

seacreature: that is a great strategy, one I use from time to time but should do so more often 06:46 PM

drbrain: anyhow, the root of my ship movement problem was in how I was tracking the state of the ship throughout a set of actions it had decided to perform 06:47 PM

drbrain: I had to alter most of the code and change how a ship's actions were played out to prevent them from being in the wrong order 06:48 PM

drbrain: I also cleanly separated the UI of the game from the inner workings of the game 06:48 PM

drbrain: so I could say "when I put a ship here and an enemy there, the actions should be these" 06:49 PM

seacreature: I tried to do this as well, and I'd say in some aspects I did a good job of that, but in other aspects there are still some things that need work 06:49 PM

seacreature: The trouble was things like when I renamed the game events it worked with 06:49 PM

seacreature: or moved from a rectangular map to a circular one 06:50 PM

seacreature: because even though the details of those things could be pushed down into the game engine, the interfaces (and the semantics) of my objects changed 06:50 PM

drbrain: it still took me a few days to figure out my sequencing issues though 06:51 PM

drbrain: while my test coverage was fantastic, it took me a long time to grasp what was happening when the UI ran it 06:51 PM

seacreature: so, the reason I'm interested in this question is mostly because Practicing Ruby readers asked me to look into the kind of architecture that Uncle Bob talked about in a presentation at Ruby Midwest 06:52 PM

drbrain: so, I didn't have an interfacing problem 06:52 PM

drbrain: at least, software interfacing 06:52 PM

seacreature: 06:52 PM

drbrain: but I did have a representation problem where the pictures on the screen didn't give me enough data on the current state 06:52 PM

seacreature: drbrain: I feel like when you are manually testing the UI that can always be a problem 06:53 PM

seacreature: you sort of don't know how much debugging information is right until you know what can go wrong 06:53 PM

seacreature: now that I think about it, it might have been neat to throw an exception handler into my game that'd fire up pry 06:54 PM

seacreature: in my case it was usually exceptions 06:54 PM

seacreature: in your case, it sounds like it wasn't so easy to nail down 06:54 PM

drbrain: I think that was largely because the idea of how I thought it should work when I started didn't match what I wrote 06:56 PM

drbrain: there were also some problems synchronizing the moves of each side 06:56 PM

drbrain: … related problems 06:57 PM

seacreature: I feel like there has to be some related analogy to doing UI level testing in say, rails applications 06:57 PM

seacreature: the problem that I found with my game is that I spent a lot of time building and tuning this little thing: 06:58 PM

seacreature: its code is terrible, but what it does is simulate player key presses 06:58 PM

drbrain: I never got to the point of a UI that the user could use to set up a battle 06:59 PM

seacreature: It completely solved the problem I had, which was giving me coverage at the UI level of all the happy paths through the game, so that I could detect interface problems 06:59 PM

seacreature: but the problem with it is that it was full of all sorts of exciting little rat's nests 06:59 PM

drbrain: OpenGL is event-based, so it would behave similar to your simulator 06:59 PM

seacreature: like if you accidentally introduced some change that made it so it didn't release the right keys at the right time, it would just hang 06:59 PM

seacreature: I'm pretty sure Ray is just using OpenGL under the hood 07:00 PM

seacreature: the problem with this is the same problem I feel when I've had to test UI stuff in rails (something I'm very much not experienced at) 07:00 PM

drbrain: but I also broke each event handler into a separate method I could test individually 07:00 PM

seacreature: I felt like I was spending a ton of time figuring out the inner workings of the delivery mechanism rather than focusing on testing my app 07:01 PM

seacreature: maybe this is a sign of tool immaturity 07:01 PM

seacreature: for example, maybe Ray as a framework could have more tools like this built in? 07:01 PM

drbrain: it's been three years since I worked on a rails app that had UI now, so I don't have many current opinions on it 07:01 PM

seacreature: anyone have opinions? 07:01 PM

seacreature: I am basically in the same boat 07:01 PM

seacreature: I know capybara made life a lot easier, at least :) 07:02 PM

drbrain: I felt the same way about the OpenGL drawing 07:02 PM

drbrain: the answer to "how do I get this triangle lit properly?" was usually "experiment for a long time" 07:03 PM

seacreature: this isn't meant to blame Ray though, the version I have installed is 0.2.0 07:03 PM

seacreature: it's amazing for such an early stage tool 07:04 PM

seacreature: it just might mean that when using an immature framework, things like UI testing are going to be naturally harder 07:04 PM

Sou|cutter: Are these two separate issues? Simulating input events vs verifying UI things? 07:04 PM

seacreature: so you probably have even more of an incentive in that situation to decouple UI from your game engine as much as possible 07:04 PM

seacreature: Sou|cutter: drbrain and I were kind of talking on two separate threads, yes 07:05 PM

seacreature: but verifying UI things is to some extent dependent on simulating input events if the UI is interactive 07:05 PM

drbrain: I think they're pretty closely related 07:05 PM

seacreature: in the case of my game, I was just interested in path testing through the game... here are the acceptance tests for it: 07:06 PM

drbrain: seacreature's problem seems to be one of input while mine was one of output 07:06 PM

seacreature: 07:06 PM

Sou|cutter: nods 07:06 PM

seacreature: drbrain: actually in my case input was just a precondition for getting my sequence to play out. 07:06 PM

seacreature: for example, the thing I got stuck on badly was the introduction of levels 07:07 PM

seacreature: before I introduced levels, it was one round and then the game just ended 07:07 PM

seacreature: I added some pretty simple changes that introduced some intermediate events before ending in pretty much the same way 07:07 PM

seacreature: when I manually tested, it worked fine 07:07 PM

seacreature: but it turned out that when I ran my acceptance tests, I never bothered to properly release my keys when I got within range of an exit 07:07 PM

seacreature: because before it didn't matter, the game would have just ended 07:08 PM

drbrain: hehe 07:08 PM

seacreature: but with my simulator still pressing those keys and starting a new level 07:08 PM

seacreature: the whole assumption of where the player was going was broken 07:08 PM

seacreature: but it was really hard to track down due to limited feedback from the UI 07:08 PM

seacreature: it's like "surprise, it hangs" 07:08 PM

seacreature: and after a while it's like "Surprise, it's nowhere near where it was supposed to be" 07:09 PM

seacreature: how much more time do you have drbrain? 07:09 PM

drbrain: two hours 07:09 PM

seacreature: I realized we had a start time but not an end time for this meeting :) 07:09 PM

seacreature: Ahahaha, I don't think I will be able to hang on that long 07:09 PM

seacreature: but we can go for maybe another 30-40 mins 07:09 PM

drbrain: well, that's when I have to get on the bus 07:10 PM

drbrain: but I have IRC on the bus too :D 07:10 PM

seacreature: I'm pretty sure that Jia would kill me if I didn't give her a break from the baby soon enough :) 07:10 PM

seacreature: so let's recap on this question 07:11 PM

seacreature: 1) UI testing is going to be more or less of a challenge depending on how mature the framework testing tools are (or third party libraries are) 07:11 PM

seacreature: 2) UI testing can be complicated, so favor decoupling app code from UI code as much as possible 07:12 PM

drbrain: agreed 07:12 PM

seacreature: 3) Keep in mind that you probably need more feedback than you think you'll need about what is going on in the UI 07:12 PM

seacreature: Sound about right? 07:12 PM

drbrain: yes 07:13 PM

seacreature: anyone else have comments before we move on to another question? 07:13 PM

drbrain: I'd like to add that in general, I like to have a clear separation of the "dirty" input from the user and the "clean" input my code handles 07:15 PM

seacreature: can you give a more specific example of that? 07:15 PM

chastell: would there be any benefit of a simple text/debug UI that could be used in ‘what did just happen‽’ scenarios and by the testing layer to compare outcomes? 07:15 PM

drbrain: for an executable, I use an option parser to convert user arguments into useful ruby types 07:16 PM

chastell: (as usual, probably a lot of work for a small project, but maybe worth it for a larger one) 07:16 PM

seacreature: chastell: what I ended up doing is making a debug output for the UI; even though my game was audio only, running it with ruby -d would display text about various aspects of the game 07:17 PM

seacreature: but what I probably should have done for the automated testing, now that I think of it, is to introduce a logger into the simulator 07:17 PM

chastell: right, but was this UI any help in testing? 07:17 PM

seacreature: something that'd track what events were happening and snapshot the important details of the world between events, similar to the UI debug mode 07:18 PM

drbrain: I used logging for my space game extensively while figuring out what was wrong with my design 07:18 PM

seacreature: extremely helpful in manual testing, useless in my acceptance tests which really liked to freeze in loops :) 07:18 PM

seacreature: I think that was a big part of what was missing from the picture for me 07:19 PM

seacreature: even those floating point errors, it sure would have helped to know what mine I collided into, and where the game thought my player was 07:19 PM

drbrain: a big problem I had is "what question do I need to ask to figure out what is wrong?" 07:19 PM

seacreature: that's not a boring or sexy answer, but it would have probably been the most helpful thing overall in making the game easier to debug 07:20 PM

seacreature: It's something I finally realized in Newman, after encountering similar variability in what could go wrong 07:20 PM

drbrain: there was no obvious reason for my wrong results so I needed more data to form a testable hypothesis 07:20 PM

seacreature: so 4) It's easier to write regression tests when you know what actually happened, so make good use of logging 07:21 PM

seacreature: even some of the problems I was running into via my acceptance tests could have probably been caught and fixed at a lower level if I had enough data to find the source of the problem 07:21 PM

seacreature: Sometimes I do black-box testing as a cop-out, I must admit 07:22 PM

seacreature: it's like "I can't figure out where the source of this problem is coming from, I just want to know when it happens" 07:22 PM

seacreature: but that may be a smell 07:22 PM

seacreature: cool, I'm glad we lingered on this question a bit more, we shook some interesting ideas out of it 07:23 PM

seacreature: drbrain: can we take a 10 minute break and then resume for maybe 20 minutes more after that? 07:24 PM

drbrain: writing a high-level test and following up with several additional low-level tests has been a great strategy for me 07:24 PM

drbrain: sure 07:24 PM

seacreature: okay, back in a few 07:25 PM

rafadc: i'll be leaving now but I want to thank you for the talk 07:25 PM

drbrain: rafadc: you're welcome! 07:25 PM

seacreature: folks, if you've had lingering questions or thoughts this is a good time to dump them and we can address them before hitting my next questions 07:25 PM

seacreature: rafadc: thanks for attending! 07:25 PM

rafadc: bye, have fun! 07:25 PM

seacreature: we'll be doing a lot more stuff like this in the new MU, I hope :) 07:25 PM

seacreature: okay drbrain, ready to start back up? 07:35 PM

drbrain: yep! 07:35 PM

seacreature: great 07:36 PM

seacreature: who else is still with us for this marathon discussion? :) 07:36 PM

Daniel4475: me! 07:36 PM

Daniel4475: (but just got here a bit ago) 07:36 PM

seacreature: it's early morning for you, right Daniel4475? 07:37 PM

Daniel4475: yeap. china time :) 07:37 PM

seacreature: I miss the future, I will need to go back to it soon 07:38 PM

Daniel4475: lol 07:38 PM

seacreature: drbrain: okay, how about we tackle questions 8 and 9 07:39 PM

seacreature: My innocuous magic numbers and bad magic numbers 07:39 PM

seacreature: I found that there were a lot of places where I was computing values in my head and then writing tests with those results 07:39 PM

seacreature: (questions, for those who missed the link earlier: 07:39 PM

seacreature: so I don't think this is horrible: point_a = Point.new(2, 3) point_b = Point.new(-1, 7) 07:40 PM

seacreature: point_a.distance(point_b).must_equal(5) 07:40 PM

drbrain: I did the same thing for testing ship movement, but I made the numbers more obvious 07:40 PM

seacreature: how so? 07:40 PM

drbrain: you've got a 3, 4, 5 right triangle there 07:41 PM

drbrain: but since it crosses 0 it's not so obvious 07:41 PM

drbrain: so I would use (0, 0) to (3, 4) 07:42 PM

seacreature: I see. 07:42 PM

drbrain: I trusted that math in ruby would work out OK 07:42 PM

seacreature: I think that I wanted to cross the 0 boundary to test that I was actually working with absolute values 07:42 PM

seacreature: But even still, while those look "obvious" if you know the distance formula, they're opaque otherwise 07:43 PM

drbrain: I might have a test for negative values too, like (0, 0) to (3, -4) 07:43 PM
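The 3-4-5 idea above could be sketched like this, using a stand-in Point rather than Blind's actual class:

```ruby
# Stand-in Point with two-dimensional Euclidean distance:
#   sqrt((x2 - x1)**2 + (y2 - y1)**2)
Point = Struct.new(:x, :y) do
  def distance(other)
    Math.sqrt((other.x - x)**2 + (other.y - y)**2)
  end
end

origin = Point.new(0, 0)

# A 3, 4, 5 right triangle makes the expected value obvious at a glance:
Point.new(3, 4).distance(origin)            # => 5.0

# Negative coordinates exercise the same triangle across the axis:
Point.new(3, -4).distance(origin)           # => 5.0

# (2, 3) to (-1, 7) is the same 3, 4, 5 triangle crossing zero,
# but the reader has to work it out rather than see it:
Point.new(2, 3).distance(Point.new(-1, 7))  # => 5.0
```

All three assertions pin the same formula; the difference is only how much the reader has to compute in their head.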

seacreature: okay, that's more readable for sure 07:43 PM

drbrain: nods 07:43 PM

seacreature: but I sometimes have resistance to using numbers that are too close to ambiguous states 07:44 PM

seacreature: this is kind of a trivial example because we're talking about simple distance computations 07:44 PM

drbrain: I imagine most people who know how to calculate distances probably also know 3, 4, 5 right triangles 07:45 PM

seacreature: but with more complicated computations, sometimes choosing something that crosses over the origin might mask edge cases 07:45 PM

drbrain: yeah, but the distance formula is pretty basic and hard to mess up 07:45 PM

seacreature: that's true, I'm mostly offering an explanation of why I used 2,3 and -1, 7 07:46 PM

seacreature: but I think maybe because of the simplicity of the computation, that extra caution is creating ugly looking examples that are not gaining much extra safety 07:47 PM

drbrain: when I'm implementing formulas or protocols where exactness is a requirement I like to assume some other readers would also have the relevant texts handy (perhaps through a link in the documentation) 07:47 PM

seacreature: so your point of view on this is that if you have a point object and a method that computes point to point distances, it's reasonable to assume someone reading the tests knows both the distance formula and common right triangles 07:48 PM

drbrain: I think so 07:49 PM

drbrain: in order to understand most of the rest of your game they'll need to understand the math basics of moving about on a coordinate grid 07:49 PM

seacreature: I suppose that even a single link like "uses two-dimensional euclidean distance formula:" 07:50 PM

seacreature: "uses a simple 3,4,5 right triangle for simplicity" 07:50 PM

seacreature: those two comments would make those tests 100x better 07:51 PM

drbrain: certainly 07:51 PM

seacreature: and would also invite the suspicious to try out other edge cases, now that they have sufficient information about the INTENT 07:51 PM

seacreature: I have no idea why i all capped INTENT 07:51 PM

seacreature: I'm getting tired :) 07:51 PM

seacreature: Okay, now... let's move on to more ugly forms of magic numbers, things that are not universal facts, but things that are arbitrary configuration values 07:52 PM

drbrain: I like to add documentation, but sometimes the source material allows future readers to ask better questions because they'll read it differently 07:52 PM

seacreature: 07:52 PM

seacreature: this test file is terrible 07:53 PM

seacreature: but I wasn't really sure how to fix it. 07:53 PM

seacreature: looking at it closer, the only thing I could think of would be to move the defined points into a configuration file as well and give them descriptive names 07:54 PM

seacreature: fixtures[:safe_zone_points] 07:54 PM

seacreature: define those alongside the world that defines its safe zone coordinates 07:54 PM

seacreature: reference them by name in the test file 07:54 PM

seacreature: But when I do things like that I worry that abstracting the tests too much makes them meaningless to read 07:55 PM

drbrain: I think there's going to be a little ugly in this kind of test no matter what 07:55 PM

drbrain: you're testing parts of your game that are core to its functionality and messing them up may not be immediately obvious 07:56 PM

drbrain: I find these kinds of core tests are always verbose 07:56 PM

seacreature: so the question is what kind of ugly would you choose? 07:57 PM

drbrain: these kinds of tests let you ask yourself fundamental questions about your design 07:58 PM

drbrain: your world is fixed at 240 units across 07:58 PM

drbrain: if that's fundamental to your game and won't ever change I think these tests are fine 07:59 PM

seacreature: Well... that's how it started 07:59 PM

drbrain: if you want to change it, perhaps you should switch from absolute coordinates to relative coordinates 07:59 PM

drbrain: or add a separate test of a world with a different radius 07:59 PM

seacreature: then I changed the worlds to have arbitrary sized regions 08:00 PM

seacreature: 08:00 PM

seacreature: jordanbyron2: how about that high school internet, eh? 08:01 PM

seacreature: drbrain: take a look at that config file and tell me what you'd do given that kind of setup 08:01 PM

drbrain: I think there's a different way to think about this test, also 08:02 PM

drbrain: it seems what you care about most is that World#region_at works: 08:02 PM

drbrain: not necessarily where the regions are 08:03 PM

drbrain: so I think this is good: 08:03 PM

drbrain: and maybe you need only one of the following tests to define the boundary conditions for region_at 08:03 PM

drbrain: while you'll still use fixed coordinates, the amount of duplication should go down 08:04 PM

seacreature: so you could basically pick points at the boundaries of each concentric circle 08:05 PM

seacreature: and show that it can cycle through all of them 08:05 PM

seacreature: and that would reduce the number of test cases (and tested points) significantly 08:05 PM

drbrain: or even test just one boundary 08:06 PM

drbrain: then test that there are four regions 08:06 PM

seacreature: well, you again end up with weird cases 08:06 PM

seacreature: the innermost and outermost could have special cases involved with them 08:07 PM

seacreature: just because the innermost is [0, r) and the outermost is [r', Infinity) 08:07 PM

drbrain: sure 08:07 PM

seacreature: so I don't feel comfortable testing just one boundary, in my mind I'd need to test three 08:08 PM

seacreature: and if I'm testing three, might as well test all four! 08:08 PM
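The boundary-testing idea being discussed might be sketched like this; the radii, region names, and region_at logic are invented stand-ins, not Blind's actual configuration:

```ruby
# Concentric regions by distance from center. The innermost region is
# [0, r) and the outermost is [r', Infinity), so the boundaries
# themselves are the interesting test points.
RADII = { safe: 10.0, mine: 20.0, exit: 30.0 }  # :outside is unbounded

def region_at(distance)
  return :safe if distance < RADII[:safe]
  return :mine if distance < RADII[:mine]
  return :exit if distance < RADII[:exit]
  :outside
end

# One point at each boundary cycles through all four regions:
region_at(0.0)   # => :safe     (inner edge of the innermost region)
region_at(10.0)  # => :mine     (each boundary belongs to the outer region)
region_at(20.0)  # => :exit
region_at(30.0)  # => :outside  (outermost region is unbounded)
```

Testing exactly the boundary points covers the half-open edge cases while keeping the number of magic coordinates small.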

seacreature: that said, I think that if I lift myself up from the particular example, the advice you're offering could help me with my magic number problem in general 08:08 PM

drbrain: I think we're looking at the purpose of these tests differently as well 08:08 PM

drbrain: you wrote higher-level tests referring to what the regions mean 08:09 PM

seacreature: Oh, that's another good point, these tests are somewhat vestigial, hence the FIXME at the top of the file 08:09 PM

seacreature: but I'm using them as an example of a general problem 08:09 PM

drbrain: I would write lower-level tests that make sure the regions have proper boundaries then apply meaning atop that as a second set of tests 08:09 PM

seacreature: gotcha 08:10 PM

drbrain: in this test file I see a mix of both ideas 08:10 PM

seacreature: so if i'm understanding you right, you're basically encouraging partitioning the problem space so that you're testing the right things at the right level 08:10 PM


drbrain: yes 08:11 PM

seacreature: there is probably something here to do with combinatorial testing 08:11 PM

drbrain: I think of my different tests as layers of trust 08:11 PM

drbrain: if I have a test that operates on multiple layers I find it becomes brittle as changes happen 08:12 PM

drbrain: and often it looks funny 08:12 PM

drbrain: if not ugly 08:13 PM

drbrain: Also, I think I wouldn't write tests for config/world.rb 08:14 PM

drbrain: at least, provided I knew regions behaved correctly 08:14 PM

seacreature: Yeah, I think several of these methods could just be deleted 08:14 PM

seacreature: so, the more I look at this, the more I think we're looking at something that was mid-refactor and that can be blamed for a lot of this 08:15 PM

seacreature: but let's recap the general points nonetheless 08:15 PM

drbrain: at RubyConf last year there was an excellent talk about how to use mocks when testing 08:15 PM

drbrain: but it was as much about separating your design to reduce coupling as it was about mocks 08:15 PM

seacreature: 1) when dealing with well-defined procedures, i.e. the distance formula, well placed documentation (with references to source materials) can make the numbers a lot more meaningful 08:15 PM

seacreature: the extent to which you need to introduce noisy numbers into your tests (as opposed to simplified ones) has to do with how likely it is that your procedure will have edge cases 08:16 PM

seacreature: 2) 08:16 PM

seacreature: ^ 08:16 PM

seacreature: 3) You can expect with core systems to have some level of messy tests SOMEWHERE 08:17 PM

drbrain: or if not messy, strict and verbose 08:17 PM

drbrain: here's the talk: 08:17 PM

drbrain: while I still don't like to write mocks, it made me think about how I build tests in layers that don't talk to each other 08:18 PM

seacreature: 4) you, yeah, that is a great talk 08:18 PM

drbrain: this leaves me with more flexibility to make changes later 08:18 PM

seacreature: whoops! 08:18 PM

drbrain: since the tests aren't stepping on each other 08:18 PM

seacreature: right, I should re-watch that. It really feels like it's more about design than mock objects 08:19 PM

seacreature: and 4) try to keep the different layers that you are trying to test isolated from one another, so that if you do need to use some magic numbers, they are more contextually isolated 08:20 PM

seacreature: wow... we've been at this for longer than I planned to. We hit all but two of my questions 08:21 PM

seacreature: the one about how to test things like symmetries I think we can skip, some of the other ideas we discussed would help with that 08:21 PM

seacreature: and I don't want to go down another numeric rabbit hole right now 08:21 PM

drbrain: sure 08:21 PM

seacreature: but the one question I was saving for last I do want to ask if you have time for it 08:21 PM

drbrain: I do 08:22 PM

seacreature: It essentially boils down to a question of how big of a slice is safe to break off when doing a refactoring cycle 08:22 PM

seacreature: and also, how to go about breaking down things so that you can refactor in smaller cycles 08:22 PM

seacreature: I realized that if you refactor for too long under the red light, it's almost like not having tests at all 08:23 PM

seacreature: or maybe worse than not having tests even 08:23 PM

drbrain: definitely 08:23 PM

seacreature: i.e. if you are making a change that has a lot of dependencies (even if in the end it won't change the public surface much), dealing with cascading failures can be hellish and disorienting 08:24 PM

seacreature: In particular, this was a commit where I was clearly thrashing: 08:24 PM

drbrain: all but the simplest refactorings show you how good both your design and your tests are 08:24 PM

drbrain: yeah, there appear to be two separate changes in that commit 08:25 PM

seacreature: after this bad experience, what I tried to do when making another largish change was to make small refactorings around the area that I wanted to make my changes in 08:25 PM

seacreature: and that made it so that I basically smoothed out the area around where I wanted to make the cut, and I found that to be a lot less painful 08:26 PM

drbrain: I've learned the same lesson more than once myself 08:26 PM

drbrain: that's usually the strategy I used 08:26 PM

seacreature: how do you feel about adding new APIs before removing old ones? 08:27 PM

seacreature: that was one of the things I was struggling with, I was going to sort of change the interface to my object 08:27 PM

drbrain: before I answer, let me finish a thought 08:27 PM

seacreature: sure, sorry. 08:27 PM

drbrain: while at railsconf I sat with Elise Worthy, who is a Hungry Academy student, and paired with her 08:27 PM

drbrain: her questions were more about how to think about programming than how to do her task 08:28 PM

drbrain: (I'll have a blog post up about it later today) 08:28 PM

drbrain: but the most important tool I use is to always do the smallest, simplest step 08:28 PM

drbrain: I've learned that performing a refactor is no different than adding a feature 08:29 PM

drbrain: both are programming, so both should be done in the same small, simple steps 08:29 PM

drbrain: as to API, I almost always write the new API first then convert old callers second 08:29 PM

seacreature: this is another area where my tendency towards thinking about mathematical purity hurts me rather than helps me 08:30 PM

drbrain: while I don't do this, I think git stash might be helpful to roll back from mistakes 08:30 PM

seacreature: for example, I don't like adding new APIs before removing old ones because using a combination of the two may lead to an inconsistent object state if you aren't careful 08:30 PM

seacreature: however, that probably has to do with a lack of trust in my existing tests and production code 08:31 PM

seacreature: the adding of a new API before removing an old one does seem to be more flexible 08:32 PM

seacreature: it also helps you figure out on a gradual basis whether the new API is really a proper replacement or not 08:32 PM

drbrain: usually when it's time to remove the old API I add a raise 'no' to it which will flush out all the callers pretty quickly 08:32 PM

seacreature: ooh, that's a really neat idea 08:33 PM

drbrain: in mechanize when I perform a refactor I often reimplement the old API in terms of the new API 08:33 PM

drbrain: so if I need to keep an API around for legacy reasons it still works 08:33 PM

seacreature: yeah, for this question I'm not concerned about backwards compatibility as much as I am about having to drudge through a complex web of dependencies due to some minor change to the internals 08:34 PM

seacreature: when that happens, it may be a sign in itself of a design flaw, but you need to refactor to fix it! 08:34 PM

seacreature: I think the practice of smoothing things out around the area of the cut would help 08:35 PM

seacreature: and relaxing my obsession of doing hot swaps where you remove the old API and introduce the new one in a single swipe 08:35 PM

seacreature: and I really like the idea of raising to detect old calls 08:36 PM

seacreature: what I do is use ack for that, but it'd be nice to let the tests guide me more 08:36 PM

drbrain: I have a strong feeling that a testing style that supports refactoring is the best one, but I don't know what that means in concrete terms 08:36 PM

seacreature: sounds like an idea we should work on trying to develop 08:36 PM

seacreature: I'm sure practicing ruby readers would love that, anyway :) 08:36 PM

drbrain: my older code shows a testing style that is not so supportive of refactoring despite often having > 75% coverage, so I'll have to think longer on the difference 08:37 PM

drbrain: I think Gregory Moeck's talk is instrumental to this idea 08:37 PM

seacreature: maybe we should do another study session where we discuss his talk, in a couple weeks from now? 08:37 PM

seacreature: that might be easier for others to participate in 08:38 PM

seacreature: since it's not as context dependent as the things I discussed today 08:38 PM

drbrain: sure 08:38 PM

seacreature: we could even see if he'd be interested in participating 08:38 PM

seacreature: anyway, this was great. I'm not sure how much you read up on the new MU plans, but I think this is exactly what I envisioned for being a good public study session 08:39 PM

seacreature: I feel bad that folks probably didn't have enough context to dig in as much as maybe they would have liked to, but when I write the summary of this meeting I think they'll be able to pull out some good ideas 08:39 PM

seacreature: and we did get some good feedback early on 08:39 PM

drbrain: I've skimmed them a couple times, it sounds like a great change for you all 08:40 PM

seacreature: I don't regret the path we took to get to where we are, but a change in strategy is needed moving forward 08:41 PM

seacreature: I think as long as we keep setting a good example of the kind of culture we are trying to create, we'll do better in public than in private anyway 08:41 PM

seacreature: thanks a lot for this session today, I learned a lot 08:41 PM

drbrain: thanks for inviting me! 08:42 PM

Daniel4475: thank you gregory and drbrain 08:43 PM

drbrain: Daniel4475: np! 08:43 PM

chastell: thanks, guys! I do hope this is how new MU looks in the future. :) 08:45 PM

chastell: 2:45 am, off to bed! o/ 08:46 PM

drbrain: chastell: good night 08:46 PM

seacreature: the important thing that we'll need to get in the habit of is synthesizing this information. 08:47 PM

seacreature: I'll try to set a good example with my writeup 08:47 PM

seacreature: but that's the hard part :) 08:47 PM

Daniel4475: seacreature: looking forward to it 08:47 PM
