[RFC] What's the future of Spec? #8289

Open
asterite opened this issue Oct 7, 2019 · 45 comments

@asterite
Member

asterite commented Oct 7, 2019

After the recent refactor of spec in #8125 we lost a bit of functionality, which I now kind of regret.

In the past spec was simple: everything ran from top to bottom as soon as you wrote things. For example:

require "spec"

describe "something" do
  some_var = SomeObj.new

  it "..." do
    use some_var
  end

  it "..." do
    use some_var
  end

  some_var.cleanup
end

In the old spec we could create an object, share it across a couple of specs, then clean it up. In the new spec this isn't possible because the code surrounding an it block runs before the contents of the it block.

Now, RSpec works this way. But in RSpec you have before/after hooks, you have let and you have instance variables you can share in your specs.

We have two options:

  • go back to the old spec, but give up some features (focus, randomized tests, maybe parallel tests)
  • continue with the current spec but add new features like before/after hooks, some form of let to be able to share variables between specs, etc.

I think I prefer the first option, only because it's simpler. Maybe we don't need focus, randomized specs and so on. The second option brings a lot more complexity, which RSpec is known for (and I don't like it).

Thoughts?

@bcardiff
Member

bcardiff commented Oct 7, 2019

Maybe we can iterate on some ideas that don't match RSpec 100%.

For example, with implicit or explicit creation of an execution context:

private class Context
  property something = SomeObj.new

  def {before|after|around}_{suite|each}
  end
end

describe "something", using: ctx = Context.new do
  it "..." do
    use ctx.something
  end

  it "..." do
    use ctx.something
  end
end

The feature I am most interested in is tagging the specs.

@Blacksmoke16
Member

Blacksmoke16 commented Oct 7, 2019

At that point, wouldn't it be less complex to just do something like:

def do_with_context
  context = Context.new
  yield context
end

do_with_context do |ctx|
  describe "something" do
    it "..." do
      use ctx.something
    end

    it "..." do
      use ctx.something
    end
  end
end

Which worked before; it's what I was using for Athena tests. The method starts a server in the background with the given config/env settings, news up an HTTP client pointing to that server, yields the client, executes the tests, then closes the server.
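
For illustration, a minimal sketch of that kind of helper using the stdlib HTTP::Server and HTTP::Client (the names are made up, and it relies on the old behaviour where it blocks ran as soon as they were encountered):

require "spec"
require "http/server"
require "http/client"

def do_with_http_context
  server = HTTP::Server.new { |ctx| ctx.response.print "ok" }
  address = server.bind_tcp 0 # pick any free port
  spawn { server.listen }
  Fiber.yield # give the server fiber a chance to start accepting
  client = HTTP::Client.new(address.address, address.port)
  yield client
ensure
  client.try &.close
  server.try &.close
end

do_with_http_context do |client|
  describe "the API" do
    it "responds" do
      client.get("/").body.should eq "ok"
    end
  end
end

# With the old Spec the example ran right here, inside the helper; with the
# current Spec it only runs later, after ensure has already closed everything.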

Including a feature like that would just be moving complexity from the user's code into the compiler, IMO. Keeping things simple would be preferable.

@asterite
Member Author

asterite commented Oct 7, 2019

@bcardiff the problem is there's no way to implement that with the current language.

👍 to revert spec to the previous form?

@j8r
Contributor

j8r commented Oct 8, 2019

@asterite It's too early in my opinion.
It's a good selling point for the language to have fast executing tests, and with parallelization it would become even stronger.

@ysbaddaden
Contributor

Let's remember that Ruby's Minitest has i_suck_and_my_tests_are_order_dependent! I believe tests should always be run in random order by default. Crystal has concurrency and now parallelism, so tests should always run concurrently and in parallel by default, too. Missing out on these in the official test suite runner would be such a loss!

What's missing is a context to share state for each before/it/after to run in. In Minitest (and maybe RSpec) there is an instantiated class, so we have a self object to rely on (isolated from other tests).

@jhass
Member

jhass commented Oct 8, 2019

Maybe it's just time to give up on the DSL and just go the test/unit route of

class MyTest < UnitTest
  def before_all
    @shared = create_shared
  end

  def test_a
    assert_sane @shared
  end

  def test_b
    assert_happy @shared
  end
end

As much as I'd hate that...

Now of course we could have the macro foo create classes like that, but that goes against Ary's KISS wishes. Also, if we then allow nesting, that easily creates a shitload of classes overloading the compiler, as early experiments by some of us (mine's at https://gist.github.com/jhass/e9219dd9da24204a073d) have shown in the past.

@asterite
Member Author

asterite commented Oct 8, 2019

Thank you all for some good answers!

I'm also surprised my proposal got 3 👍 but 5 👎 so far, which means maybe the current approach isn't that bad.

I'm starting to think that if doing setup/teardown was already kind of hard before the spec refactor, now it's harder, but that might be a good thing. For example, in this Haskell article they mention that to do setup/teardown you have to do it inside each test, which makes it clear what goes on (you don't need to look up in the file to see whether there's some setup going on), so each test becomes self-contained. This is something I really like about our spec, and with this change we encourage it more (you can't have a test share a variable declared outside of it).

I do think adding before_each/before_all and after_each/after_all might come in handy to set up some global state, like setting up a DB structure once for all tests, so I'll send a PR for that later.

Finally, a form of controlled let that lets you define a value to share across some tests, but where each test gets a different value/instance of it, so tests can still run in parallel, might be a good compromise.

I'll first send a PR for the hooks, then I'll draft a PR for let.
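
Purely as an illustration of that idea (nothing like this exists in Spec today), such a "controlled let" could be as small as a macro that defines a plain method, so every call, and therefore every example, builds its own instance:

require "spec"

# Hypothetical macro: defines a file-private method whose body is re-evaluated
# on every call, so examples never share state (unlike RSpec's memoized let).
macro let(name, &block)
  private def {{name.id}}
    {{block.body}}
  end
end

let(:numbers) do
  [] of Int32
end

describe "controlled let sketch" do
  it "gives each example its own value" do
    list = numbers
    list << 1
    list.size.should eq 1
    numbers.should be_empty # a second call returns a brand-new array
  end
end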

@jwoertink
Contributor

jwoertink commented Oct 8, 2019

I love having the focus feature, and have been using it every day since it came out. I also love having Spec.before_suite and Spec.after_suite, because using at_exit in a spec_helper always felt like a hack.

I agree that it's nicer to have simpler code, but if the language is simple, then each project becomes more complex. Even though our Spec takes most of its DSL from RSpec, I think it's OK if we break off and make it more "crystal-like".

We could pass the context in to the blocks. Spec.before_each could run before each test in the entire suite, whereas ctx.before_each runs before each test in that context. Maybe we could have a way to add our own custom context and such...

describe SomeClass do |ctx|
  ctx.before_each do
    only_run_before_each_thing_in_ctx
  end

  describe "sub thing" do |ctx2|
    ctx2.before_each do
      another_block
    end
  end
end

Not to say it needs to be exactly like that, but the idea is that we don't have to copy RSpec 100%. We just need what's best for our community to make testing a first-class citizen, and fun to do while being robust.

edit (since I had to sell my team on why I believe keeping specs on their current path is the way to go):

It's my understanding that with the new spec path, we can have these:

  • tagging specs & custom filter (i.e. focus)
  • random order running
  • specs in parallel
  • proper setup / teardown built-in
  • context aware specs

@paulcsmith
Contributor

@asterite I like your point here:

For example, in this Haskell article they mention that to do setup/teardown you have to do it inside each test, which makes it clear what goes on (you don't need to look up in the file to see whether there's some setup going on)

I think this is super important. RSpec specs with tons of befores can get unwieldy. With that said, sometimes before or around is helpful for setting up and tearing down stuff like database connections or servers that aren't really related to the test, but may be necessary to make the test work.

So I would not be against adding those features. I think the real problem is using let inside blocks and being able to override lets within other describe blocks. That, and instance vars in before blocks. This makes tests so unwieldy as they grow, because you have to look in a million places to figure out where data is being set up and passed around. I'd consider it a feature if you could not pass data from before blocks or set instance vars in them. Instead, before is meant for test setup like starting servers, starting a db transaction, etc.

If you have a lot of setup I think private methods work super well, but I realize this may just be my opinion. Some people love let.

@sam0x17
Contributor

sam0x17 commented Oct 12, 2019

I've been using my own patch that adds around_each for some time now, and as has been previously mentioned, this is absolutely necessary if you are wrapping specs in a database transaction, as this cannot be done with simple before/after hooks. I'd be happy to submit a PR if that is the direction we are going.
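
A minimal sketch of the idea, assuming an around_each hook shaped roughly like the one being drafted, plus crystal-db (a driver shard must also be required); TEST_DB and DATABASE_URL are placeholders:

require "spec"
require "db"
# a driver shard (e.g. sqlite3 or pg) must also be required

TEST_DB = DB.open(ENV["DATABASE_URL"]) # placeholder connection URL

Spec.around_each do |example|
  TEST_DB.transaction do |tx|
    example.run # run the spec inside the transaction
    tx.rollback # discard all writes so every spec starts from a clean database
  end
end

# Note: the example body still needs access to the transaction's connection,
# which is exactly the question raised further down in this thread.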

Regarding spec ordering, in my opinion specs should be randomly ordered by default, with the option to specify alternate orderings. There are so many errors that can go unnoticed if you always run specs in the same order. Depending on specs running in a particular order is an anti-pattern, and preventing it entirely would be as big a win in my book as the compiler preventing nil reference errors at compile time.

A key piece of the random ordering feature, though, should be an option to re-run with the exact previous ordering upon failure. So upon failure, we print out something like "you can run crystal spec --seed j8897dhajk to re-run the specs with the current ordering". That way when someone identifies an order-dependent error, it can be reproduced and shared in a deterministic way.

I realize this creates the issue RSpec has to deal with, where setup code is difficult because it blocks aren't run immediately. In general, a lot of our users are coming from Ruby/Rails and will be familiar with how RSpec behaves, so I don't see this as a bad thing.

I agree that using instance variables in hooks is an anti-pattern, so other than adding a few more hook types like around_each, I think we should leave things alone -- no let, etc.

@asterite
Member Author

@sam0x17 I agree

In case you are working on hooks, just know that I already submitted a draft PR for this: #8302

I'll finish it soon so we can review it.

@sam0x17
Contributor

sam0x17 commented Oct 12, 2019

This is my feature wishlist for Spec:

  • skip: true to be consistent with focus: true
  • parallel specs (not on by default, as many codebases aren't parallel ready I imagine)
  • random spec ordering, on by default, can be configured via Spec.default_ordering or command line parameters. Can specify seed via Spec.random_seed or crystal spec --seed [seed]. Seed is printed upon any failures for easy reproduction of order-dependent errors.
  • around_each hooks
  • after_failure hooks -- useful for recording failure statistics / in CI situations if there are non-deterministic errors that fail a small percentage of the time. The block should be provided with an object representing the failure, including profiling information and a stack trace.
  • printed warnings for specs that take longer than [some threshold]. Threshold can be configured via Spec.runtime_warning_threshold or crystal spec --threshold 5s.
  • group: :some_name shorthand for defining groups of specs that can be run like crystal spec --group some_name. Allow for a spec to belong to multiple groups via group: [:slow_specs, :some_name]. Also pre-populate the hierarchy of describe values as group names for the current spec, so you can do crystal spec --group User and all specs within describe User will run.
  • depends_on: :group_name allows creating a dependency hierarchy. Could use this to figure out the minimal set of specs that need to run to cover a particular feature/change. A lot of this could be automatic, e.g. an untracked change in the User class means the spec group for User needs to run. crystal spec --minimal would try to run just the specs that directly cover untracked git changes. crystal spec --auto would try to first run all specs that cover features that are depended on by specs in crystal spec --minimal followed by all specs in crystal spec --minimal. Specs that live in a "bad" describe block that can't be automatically linked to a file or piece of code, e.g. describe "some string that provides no context", would just always run since we can't know what code those specs cover so we have to assume they are always needed. The main idea of this approach is you run the fewest number of specs possible, and spec ordering is done such that if A depends on B, the specs for B will run first, to the best of our ability. With cyclic dependencies, we fall back to random.
  • retries: [number] syntax -- for non-deterministic / network-dependent specs that we might want to re-run several times until success in the case of failure due to a temporary network glitch or something beyond our control. Will produce a warning every time more than 50% of the retries for a spec are required for success and will produce a failure if all retries fail.

@straight-shoota
Member

@sam0x17 When using around_each for db transactions, how do you inject the transaction instance into the example blocks?

Regarding your wish list:

  • skip: true is essentially not much different from pending. So while being simple to implement, I doubt it's worth adding another concept.
  • parallelism should eventually be possible
  • Randomisation is already happening in Spec: Add ability to randomize specs #8310
  • Grouping is already happening in Add ability to tag specs #8068
  • depends_on sounds like a lot of complexity and, IMO, questionable usefulness. It's definitely a longer term discussion.
  • I doubt retries would be a good thing. Specs should be written to always succeed. The occasional failure for unrelated reasons is fine and shouldn't be hidden; maybe it's a sign to apply some improvements instead. Besides, you can always implement some manual retry of fault-prone behaviour (see the sketch below).
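
To illustrate that last point, a manual retry can be a few lines of plain Crystal (the helper name is made up):

require "spec"

# Re-run the block up to max_attempts times, re-raising only the final failure.
def with_retries(max_attempts = 3)
  attempts = 0
  loop do
    attempts += 1
    begin
      return yield
    rescue ex
      raise ex if attempts >= max_attempts
    end
  end
end

it "talks to a flaky external service" do
  with_retries(3) do
    # fault-prone call goes here; only the last failure propagates to Spec
  end
end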

@yxhuvud
Contributor

yxhuvud commented Oct 14, 2019

Parallel specs by default would, on the other hand, be a great way to actually find real issues early, for the same reason random ordering does. It may require some additional ways to flag stuff that uses the same external resource, though.

@yxhuvud
Contributor

yxhuvud commented Oct 14, 2019

I could see implementing bisect being a thing. Sometimes just getting a test failure for a random order isn't sufficient for actually finding what it is that triggers the error.

@sam0x17
Contributor

sam0x17 commented Oct 14, 2019

@straight-shoota that's some very good feedback, and generally I agree.

Having seen a lot of Rails apps in production at various companies, I will say this: once an app is large enough, there are almost always going to be "unstable" specs, and usually managing these becomes a CI nightmare. Are they bad? Yes. Are they terrible? Yes. This is a very common thing with integration testing involving JavaScript; sometimes it just chaotically doesn't work. So my motivation for wanting a retries feature is to provide a framework for doing unstable specs in an organized way, rather than whatever even worse way users come up with (such as re-running the spec suite from start to finish until it passes, which I have seen too many times in the wild).

depends_on-style specs are probably going to end up being my own shard when I get around to doing it. I believe in a testing methodology where coverage is detectable (imagine doing this with annotations) and we can completely determine which specs need to run for a particular set of changes or to cover a particular piece of code.

Didn't know about pending -- that does what I need just fine.

@sam0x17
Contributor

sam0x17 commented Oct 14, 2019

@straight-shoota to illustrate my point based on your comment,

The occasional failure for unrelated reasons is fine and shouldn't be hidden

Once you get to 3000+ specs (which is the average spec suite size at companies I have worked with), "occasional" almost always happens at that volume, unless there are zero integration specs.

@alexanderadam

alexanderadam commented Oct 14, 2019

Some of the mentioned features are already included in feature-rich shards like Spectator (which integrates modern RSpec features and syntax).
So far I thought the standard library's Spec was just meant to be kind of minimal yet functional. On the other hand, it would obviously be nice to have wonderful spec features available within the standard library as well.

Is there a general consensus so far on the high-level general idea of Spec (minimal vs. feature-rich)?

@sam0x17
Contributor

sam0x17 commented Oct 14, 2019 via email

@straight-shoota
Member

Yes, stdlib spec should be minimal but include enough features for a basic experience. Additional features can be added by third-party shards. depends_on and retries would be better suited to a shard.

@jkthorne
Contributor

I prefer some features of spec and some of test/unit.

I like assert-style tests; I find them more predictable in my assumptions about what is going on. I know some people find the should syntax adds clarity, but I like the simplicity of the assert interface. I also like the structure of describe, context and it, so I use those to set up tests.

When it comes to structure, I like to have more helper method calls in specs and fewer hooks surrounding test suites. I find there are a host of problems that can come up with hooks, and they are not offset by the conciseness or line count of the code.

I want an assert-style syntax with describe and it blocks, and less reliance on before and after hooks.

@j8r
Contributor

j8r commented Oct 14, 2019

I agree on "the fewer hooks, the better".
How to solve this, that's a tough question.

@straight-shoota
Member

How to solve this, that's a tough question.

This was already answered by @wontruefree:

have more helper method calls in specs

And I fully agree with that. Until recently, there were no useful hooks in spec, but we still managed to have quite extensive spec suites for the stdlib, the compiler and many shards. I have never really missed hooks while writing hundreds of specs in Crystal. In fact, I've even adapted to using fewer of them with RSpec, because the directness of helper methods adds more clarity about what's going on.

That being said, now that hooks have been implemented (with the updated internals this wasn't a big deal), I don't mean for them to go away. I just won't use them much (or at all). And I'll probably advocate to stay with helper methods in every project I'm contributing to ;)

@sam0x17
Contributor

sam0x17 commented Oct 14, 2019 via email

@jkthorne
Contributor

Just to clarify: I think hooks are useful and handy to have in a test suite. But, in the words of the Agile Manifesto, I prefer setup helpers over hooks.

@sam0x17
Contributor

sam0x17 commented Oct 14, 2019

These are the sorts of things I regularly have to put in hooks:

  • wrapping all specs in a database transaction
  • automatically dropping, creating, migrating, and seeding the test database before the suite runs (the first two items here are sketched after these lists)
  • clearing the current cookie jar between specs (for integration specs)
  • clearing session-specific context information between specs (e.g. I have an Auth.current_user that I have to clear out, among other things)
  • Timecop-style stuff, sometimes for all specs in a particular file (this could be done in helpers though)
  • any analysis I want to perform on my own on all specs, such as benchmarking

Things I never use hooks for:

  • instantiating prefabs / test objects
  • setup code
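
A minimal sketch of those first two items, assuming the before_suite hook that already exists and the before_each hook from the draft hooks PR (the helper methods are hypothetical):

require "spec"

# Hypothetical project helpers standing in for real setup code.
def recreate_and_seed_test_database
  # drop, create, migrate, seed ...
end

def clear_session_state
  # e.g. reset the cookie jar and Auth.current_user between integration specs
end

Spec.before_suite { recreate_and_seed_test_database }
Spec.before_each { clear_session_state }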

@straight-shoota
Member

@sam0x17 Yes, I've written lots of database specs as well. They're usually wrapped in transaction blocks. It's not a big deal; it's actually nice to work with, because it makes the connection directly available as a block arg. How do you get access to it when it's initialized in a before_each hook?
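
A sketch of that helper-method style, assuming crystal-db (a driver shard must also be required); with_rolled_back_transaction and DATABASE_URL are made-up names:

require "spec"
require "db"

def with_rolled_back_transaction
  DB.open(ENV["DATABASE_URL"]) do |db|
    db.transaction do |tx|
      yield tx.connection # the spec gets the connection as a block argument
      tx.rollback         # leave the database untouched for the next spec
    end
  end
end

it "creates a user" do
  with_rolled_back_transaction do |cnn|
    cnn.exec "INSERT INTO users (name) VALUES ('sam')"
    cnn.scalar("SELECT count(*) FROM users").should eq 1
  end
end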

Of course, hooks can be useful at times and I won't condemn using them.
And I think we all agree that hook support in spec is here to stay.

@sam0x17
Contributor

sam0x17 commented Oct 14, 2019 via email

@Fryguy
Contributor

Fryguy commented Oct 15, 2019

I personally think that Crystal's Spec should offer the following, but not much more:

  • basics: describe/context/it/pending + built-in matchers
  • each/all/suite setup and teardown
  • ability to set up common variables (a simpler let or some way to do it with before_each or private methods; more comments on this below)
  • filtering (by pattern, file glob, file:line, tag, and focus)
  • tagging/grouping
  • randomization (preferably on by default)
  • parallelism
  • maybe bisect, but I could see that being a shard

As most of these are already implemented or in PRs, I think the direction it's going is good, though I do understand that the current implementation may make it difficult to do particular things like the variable sharing.

Specifically on the assert style vs should style, I personally don't like the assert style, preferring the should/expect style. However, I think that ultimately comes down to how each person's brain prefers to see things... infix notation (a eq b) vs prefix notation (eq a b). I think it would be good in this particular instance for Crystal to offer both to accommodate both types of person.

For example, in this Haskell article they mention that to do setup/teardown you have to do it inside each test, which makes it clear what goes on (you don't need to look up in the file to see whether there's some setup going on)

I've always felt this was one of the downsides to the RSpec style...that is, a lot of setup ends up "far away" from the code under test. But this happens with helper methods too...you create a method at the top of the file and then reference it over and over at the bottom of the file, so I'm not sure it's something that can be avoided.

I agree with @paulcsmith that where it gets extremely unwieldy is when you have lets and nestings overriding lets, and agree that avoiding that as an anti-feature would be a good thing. Instance vars in before blocks are similar in that they can override each other, so I could see that also being avoided. However, I think there needs to be a way to set up context, which includes setting up variables, and once you offer that I'm not sure you can really prevent overrides.

@sam0x17
Contributor

sam0x17 commented Oct 15, 2019

@Fryguy Regarding let, in projects I manage I always encourage a methodology of either creating objects directly in the spec, or using a prefab that gets automatically seeded right before the suite runs. Combine that with wrapping each spec in a database transaction, and running drop, create, migrate, seed before each suite run, and you get a pristine database environment, with reusable prefabs, for each spec. So, long story short, you can get around using let entirely; it's just down to how you set things up.

In terms of what we should do, I have no problem with adding let. I just won't use it very often if at all.

@asterite
Member Author

@sam0x17 How do you get access to the prefabs?

@sam0x17
Contributor

sam0x17 commented Oct 15, 2019

@asterite I use a macro that makes them available via method names at the top level (basically cheatsy global variables), and I also have Prefab.[whatever].

@Fryguy
Contributor

Fryguy commented Oct 15, 2019

So long story short you can get around using let entirely, it's just down to how you set things up

I think databases are kind of unique in that respect because they offer transactional rollbacks, which can even support parallelism. If you're writing specs that, say, shell out to commands that modify the system, then I think it's a bit harder. I see let (or memoized methods or whatever) being useful for things like "create a temp dir and store its name in a variable to use later in the spec", where I want a brand-new temp dir for each spec to get that "pristine environment".
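
The temp-dir case can also be handled with a tiny helper method; a sketch using the stdlib File.tempname and FileUtils (the helper name is made up):

require "spec"
require "file_utils"

def with_tmp_dir
  path = File.tempname("spec")
  Dir.mkdir_p(path)
  yield path
ensure
  FileUtils.rm_rf(path) if path # clean up so each spec gets a brand-new dir
end

it "writes a file into a fresh temp dir" do
  with_tmp_dir do |dir|
    File.write(File.join(dir, "out.txt"), "hello")
    File.read(File.join(dir, "out.txt")).should eq "hello"
  end
end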

@sam0x17
Contributor

sam0x17 commented Oct 15, 2019

@Fryguy that's true. I never thought of it that way. If I was testing stuff like that I would default to using helper methods, but it's true one could use let.

@sam0x17
Contributor

sam0x17 commented Oct 15, 2019

High-level description of how my prefab setup works:

I have a Prefab module. I define [] and []= methods for the module, with logic for the relevant models. Models are stored in a Hash(Symbol, ParticularModel) class variable. So for the user "sam", it would be Prefab[:sam] and (via a hook), Prefab.sam. I also have logic so that when a model is accessed within a spec, it is marked as dirty, and then after each spec, all dirty models are reloaded from the db, in case they have been mutated (since specs are wrapped in a db transaction). In this way, I use the same prefab objects as much as possible, and only reload if I absolutely have to, but I always have a clean state. I then have a macro at the end of the file that makes all the prefabs available as top-level getters, so I can just do sam in a spec and it will work.

I could clean this up and make it into a shard if people are interested. It's fairly ORM agnostic, though it requires a reload method on Model.
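
For readers who want the shape of it, a very reduced sketch of that pattern (illustrative only, not the actual code; Model stands in for an ORM record with a reload method):

class Model
  # Hypothetical stand-in for an ORM record.
  def reload
    # re-fetch this record from the database ...
    self
  end
end

module Prefab
  @@records = {} of Symbol => Model
  @@dirty = Set(Symbol).new

  def self.[]=(name : Symbol, record : Model)
    @@records[name] = record
  end

  # Accessing a prefab marks it dirty so it can be reloaded after the spec.
  def self.[](name : Symbol) : Model
    @@dirty << name
    @@records[name]
  end

  # Intended for an after_each hook: reload anything a spec may have mutated.
  def self.reload_dirty!
    @@dirty.each { |name| @@records[name].reload }
    @@dirty.clear
  end
end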

@ruannawe

ruannawe commented Mar 9, 2020

I have a simple example that works for me:

require "spec"

require "./../../game/Round.cr"
require "./../../models/Character.cr"

private class Context
  getter :enemy

  def initialize()
    @luck = 10
    @hero = Models::Character.new("user1", 100.0, 30.to_i64)
    @enemy = Models::Character.new("user2", 100.0, 30.to_i64)
  end

  def enemy_energy_after_calling_update_state()
    _, enemy_after_round = Game::Round.new.update_state(@hero, @enemy, @luck)
    enemy_after_round.energy
  end
end

describe Game::Round do
  describe "#update_state" do
    context "when luck factor is less than 15" do
      ctx = Context.new
      it "returns enemy with the same energy before round" do
        ctx.enemy_energy_after_calling_update_state.should eq ctx.enemy.energy
      end
    end
  end
end

@Blacksmoke16
Member

From a reusability standpoint I kinda like this approach (#8289 (comment)). The main reason being that since they are just classes, you are able to use inheritance/modules to share common logic between tests, such as using inheritance, modules, and/or abstract methods to more easily set up specs for a common thing.

I ran into this recently when working on my validation shard. It would be nice to have something like:

abstract struct AbstractComparisonTestCase < Crystal::TestCase
  protected abstract def create_constraint(**args)
  protected abstract def create_validator
  protected abstract def valid_comparisons
  protected abstract def invalid_comparisons

  def test_valid_comparisons : Nil
    self.valid_comparisons.each do |(expected, actual)|
      assert_no_violation create_validator, create_constraint(value: expected), actual
    end
  end

  def test_invalid_comparisons : Nil
    self.invalid_comparisons.each do |(expected, actual)|
      assert_violations create_validator, create_constraint(value: expected), actual
    end
  end
end

struct EqualToValidatorTest < AbstractComparisonTestCase
  protected def create_validator
    EqualToValidator.new
  end

  protected def create_constraint(**args)
    EqualTo.new **args
  end

  protected def valid_comparisons
    [
      {1, 1},
      {"foo", "foo"},
      {ComparableMock.new(5), ComparableMock.new(5)},
      {Time.utc(2020, 4, 7), Time.utc(2020, 4, 7)},
      {1, nil, "nil"},
    ]
  end

  protected def invalid_comparisons
    [
      {2, 1},
      {"bar", "foo"},
      {ComparableMock.new(10), ComparableMock.new(5)},
      {Time.utc(2020, 4, 8), Time.utc(2020, 4, 7)},
    ]
  end
end

My current workaround for this would be to just use private methods/constants and a macro to define the describe blocks:

private def create_validator
  AVD::Constraints::EqualToValidator.new
end

private def create_constraint(**named_args)
  AVD::Constraints::EqualTo.new **named_args
end

private VALID_COMPARISONS = [
  {1, 1, "numbers"},
  {"foo", "foo", "strings"},
  {ComparableMock.new(5), ComparableMock.new(5), "comparable types"},
  {Time.utc(2020, 4, 7), Time.utc(2020, 4, 7), "times"},
  {1, nil, "nil"},
]

private INVALID_COMPARISONS = [
  {2, 1, "numbers"},
  {"bar", "foo", "strings"},
  {ComparableMock.new(10), ComparableMock.new(5), "comparable types"},
  {Time.utc(2020, 4, 8), Time.utc(2020, 4, 7), "times"},
]

# This would live in spec_helper
macro define_comparison_test
  describe "#validate" do
    VALID_COMPARISONS.each do |(expected, actual, message)|
      it "valid #{message}" do
        assert_no_violation create_validator, create_constraint(value: expected), actual
      end
    end

    INVALID_COMPARISONS.each do |(expected, actual, message)|
      it "invalid #{message}" do
        assert_violations create_validator, create_constraint(value: expected), actual
      end
    end
  end
end

describe AVD::Constraints::EqualToValidator do
  define_comparison_test
end

I'd be curious to hear if anyone has a better idea.

@asterite
Member Author

@Blacksmoke16 Can you use minitest.cr instead?

@Blacksmoke16
Member

@asterite Probably. Although I'm hesitant about introducing an external dependency JUST for that. Plus it's just something else someone has to learn if they want to contribute.

@asterite
Member Author

I think there's no need for define_comparison_test to be a macro? If that's the case, this is just composing methods.

@Blacksmoke16
Member

Blacksmoke16 commented Mar 26, 2020

Having the macro content be in every spec file would also work. My thought was that this way the code is centralized, i.e. if you wanted to add something else, you update the macro and possibly add some new method to each file, as opposed to updating the describe blocks and adding a method in each file.

As far as I'm aware, a macro is the only way to reuse a describe block in multiple places.

EDIT: I just realized I could probably just define it as a method and pass in the two comparison arrays as arguments. That would probably be the better approach.

EDIT2: Eh, then I would have to pass in the validator/constraint as well. I'll prob just leave it as a macro for now 🤷

@straight-shoota
Member

Although I'm hesitant about introducing an external dependency JUST for that. Plus it's just something else someone has to learn if they want to contribute.

When you're already hesitant to add something to your project because it's a separate shard, I'm even more so about such a huge change in stdlib JUST for that. =)

@Blacksmoke16
Member

Blacksmoke16 commented Mar 26, 2020

Oh I'm definitely not saying we should refactor the whole stdlib to use a new Spec implementation. Just pointing out something that could be useful to give insight into future improvements/changes.

I'd for sure rather keep things simple, as my current workaround is pretty decent for what it's doing. If anything, some built-in feature around something like data providers could be handy. Granted, I haven't put much thought into whether that would even be possible or how it would work in the current implementation.

@RX14
Contributor

RX14 commented Mar 26, 2020

Plus it's just something else someone has to learn if they want to contribute.

So is a new style for the stdlib spec.

There's a huge value in having one standard way to do things, even if you don't like the standard personally.

@Blacksmoke16
Member

Blacksmoke16 commented Aug 6, 2020

FWIW I threw https://play.crystal-lang.org/#/r/9i05 together as a proof of concept. IDK if we'd want something like that in the stdlib, but it was fairly easy to extend the current Spec implementation to allow for a more OOP class/method testing approach. It essentially solves the reusability problems I was having as part of #8289 (comment). It also adds some extra features like the DataProvider. In the end it's just an abstraction on top of Spec that generates the describe and it blocks based on the type/methods. The downside is that it's of course another way to do what is already possible.

I'd be happy to improve the implementation/feature set if this is something we'd want in the stdlib. Otherwise, I might just make a shard with it.

EDIT: It has been released as https://github.com/athena-framework/spec.
