Testing with tracing on the BEAM
When I was writing some tests for an event broadcasting system (the one I outlined in this blog post), I found that, as expected, the unit tests were easy to write, but testing that the entire system worked as intended end to end was a little more difficult because I'd be trying to test completely asynchronous and fairly decoupled behaviour.
Testing with Mock/:meck
At this point in time, the test suites in our code base use mocks very extensively, which might be a topic for another day, because I don't believe that mocking code in tests is the best (nor most idiomatic) approach. One nice bit about Mock, the de-facto mocking library for Elixir, is that it also comes with a few assertions you can import into your test suites; namely assert_called/1.
Using this library, we could write our tests by mocking the event handling functions in the event listeners we're interested in for a particular test, and then use assert_called/1 to determine whether or not that function was called as expected. Optionally, you can provide what looks roughly like an Erlang match spec to assert that the function was called with specific arguments.
Our tests end up looking roughly like the following:
```elixir
defmodule E2ETests.EctoBroadcasterTest do
  use ExUnit.Case

  alias EventListener.EctoHandler

  import Mock

  setup_with_mocks(
    [{EctoHandler, [:passthrough], handle_event: fn {_ecto_action, _entity_type, _data} -> {:ok, :mocked} end}],
    _tags
  ) do
    :ok
  end

  test "Creating entity foo is handled by EctoHandler" do
    assert {:ok, %MyApp.Foo{}} = MyApp.Foos.create_foo(...)
    assert_called EctoHandler.handle_event({:insert, MyApp.Foo, :_})
  end
end
```
This approach basically met all of our requirements except for the fact that mocking via :meck (the de-facto Erlang mocking library and used internally by Mock) does things in a pretty heavy handed way.
At a high level, all :meck does is replace the module you're trying to mock with a slightly edited one where the functions you want to mock are inserted in place of the original implementations. This comes with three disadvantages for our particular use case:
- Because we're essentially loading patched code into the BEAM VM, we can no longer run our tests asynchronously: we've replaced code at the VM level and those patches are global
- Because we're replacing the function we want to call, we can't actually test that the function itself behaves as expected, only that it was called
- If we want to mock our functions purely for the assert_called/1 assertion (though this has annoyed us somewhat elsewhere too), we need to be careful to essentially re-implement any functionality we actually expect to happen so it remains testable, which itself can lead to brittle mocks
My main qualm was with the second problem; we ran into it pretty hard for a few edge cases (though we resolved the second case too!): say EctoHandler.handle_event({:delete, _, _}) needed to be implemented in a special way where it persists these actions into an ETS table, and you want to assert that that actually happened. You'd be a little stuffed... imagine your source code looks like:
```elixir
defmodule EventListener.EctoHandler do
  ...

  def handle_event(:delete, entity_type, data) do
    Logger.info("Got delete event for #{entity_type} at #{System.monotonic_time()}")

    data
    |> EventListener.DeletePool.write()
    |> process_delete_pool_response()
    |> case do
      {:ok, _} = response ->
        response

      error ->
        add_to_retry_queue({:delete, entity_type, data})
        Logger.error("Failed to add to delete pool, adding to retry queue")
        error
    end
  end

  ...
end
```
To mock this function properly, you'd probably just have to implement your mock such that it essentially duplicates the implementation, but this of course will blow up when the mock falls out of sync with the implementation.
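For illustration, a mock along those lines might look something like the following sketch (a hypothetical example following the handle_event/3 shape above, not code from our actual suite): the delete branch has to be re-implemented inside the mock just so its side effects remain observable.

```elixir
# Hypothetical sketch: to keep the delete side effects testable, the mock ends up
# re-implementing part of the real handle_event clause, which will silently drift
# out of sync with the real implementation over time.
setup_with_mocks([
  {EctoHandler, [:passthrough],
   handle_event: fn
     :delete, _entity_type, data ->
       # Duplicated behaviour from the real implementation
       EventListener.DeletePool.write(data)

     _action, _entity_type, _data ->
       {:ok, :mocked}
   end}
]) do
  :ok
end
```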
A less heavy handed approach
One of the coolest but possibly least used features of the BEAM is the built-in ability to do tracing.

Tracing essentially lets you capture messages being passed to particular processes and functions being called, among other things. It does come with some disadvantages (e.g. if you're not careful you can blow up your application because of the performance impact, so be careful if you trace on a production deployment 😉) and it's very low level, but it works well and there exist several higher-level wrappers.

I wanted to experiment with tracing because I've never used it very much, so I chose a pretty low-level abstraction to do the work for me: the :dbg application, which comes built into the standard Erlang/OTP distribution.
I'm in the process of cleaning up the actual implementation, but if you wanted to build your own replacement for the assert_called/1 function (with a nice added feature) feel free to follow along!
You can start the :dbg application via the following:
```elixir
def init() do
  # Flush :dbg in case it was already started prior to this
  :ok = :dbg.stop_clear()
  {:ok, _pid} = :dbg.start()

  # This is the important bit; by default :dbg will just log what gets traced on stdout
  # but we actually need to handle these traces because we want to capture that the
  # function was called programmatically rather than reading it off the shell/log file
  #
  # This tells `:dbg` to call the `process_trace/2` function in the current running process
  # whenever we get a trace message
  {:ok, _pid} = :dbg.tracer(:process, {&process_trace/2, []})

  # And this sets up `:dbg` to handle function calls
  :dbg.p(:all, [:call])
end
```
Now that the tracer is started up, we need to tell it to start tracing certain functions. You can do this quite easily by passing an MFA (module, function, arity) into :dbg.tpl/4 as follows:
```elixir
def trace(m, f, a) do
  # :dbg.tpl traces all function calls including private functions (which we needed for our use-case)
  # but I think it's a good default
  #
  # The 4th parameter basically says to trace all invocations of the given mfa (`:_` means we pattern
  # match on everything), and we also pass in `{:return_trace}` to tell the tracer we want to receive
  # not just function invocation messages but also capture the return of those functions
  :dbg.tpl(m, f, a, [{:_, [], [{:return_trace}]}])
end
```
Now, given a super simple stub function for process_trace/2, you should be able to see your traces come back looking all good:
```elixir
# Using the following process_trace/2 implementation:
def process_trace({:trace, _pid, :call, {module, function, args}} = _message, _state) do
  IO.inspect(inspect({module, function, args}) <> " - call start")
end

def process_trace({:trace, _pid, :return_from, {module, function, arity}, return_value}, _state) do
  IO.inspect(inspect({module, function, arity}) <> " - returns: " <> inspect(return_value))
end

# ---------------------------------------------------------------------------

iex(1)> Assertions.init()
{:ok, [{:matched, :nonode@nohost, 64}]}
iex(2)> Assertions.trace(Enum, :map, 2)
{:ok, [{:matched, :nonode@nohost, 1}, {:saved, 1}]}
iex(3)> Enum.map([1, 2, 3], &(&1 + 2))
"{Enum, :map, [[1, 2, 3], #Function<44.33894411/1 in :erl_eval.expr/5>]} - call start"
"{Enum, :map, 2} - returns: [3, 4, 5]"
[3, 4, 5]
```
This gets us most of the way to what we want! We can see which functions were called and capture the arguments passed into them, as well as what those functions returned.
But if you look closely, the messages for :call and :return_from have a few issues:
- Ideally, I'd have preferred a single message rather than two, because we now have some additional complexity to deal with...
- Especially because the :return_from message doesn't get the args passed into it, only the function's arity... which means we need to try our best to marry the two messages up as part of the process_trace function...
- Just as a note: I don't think we can pattern match on lambdas easily on the BEAM (nor do things like send them over :rpc.call), so at least it's consistent 🙂
We can solve the first two problems by amending the implementation of process_trace/2 quite easily though (it might not be a foolproof implementation, but it worked well enough for our use case that I'll just add the code here verbatim):
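The original snippet hasn't survived here, but a minimal sketch of the idea (my own reconstruction, not the verbatim code) treats the tracer state as a stack of in-flight calls and marries each :return_from message up with the :call that pushed it:

```elixir
# Sketch only: the second parameter (the tracer state we initialised to []) is used
# as a stack of pending calls so a :return_from can be paired with its captured args.
def process_trace({:trace, _pid, :call, {module, function, args}}, call_stack) do
  # Push the call (with its full argument list) and wait for its return
  [{module, function, args} | call_stack]
end

def process_trace({:trace, _pid, :return_from, {module, function, arity}, return_value}, call_stack) do
  # Pop the most recent call whose module/function/arity matches this return
  case Enum.split_while(call_stack, fn {m, f, a} ->
         not (m == module and f == function and length(a) == arity)
       end) do
    {skipped, [{^module, ^function, args} | rest]} ->
      IO.inspect({module, function, args, return_value}, label: "call resolved")
      skipped ++ rest

    {_skipped, []} ->
      # A return we never saw the call for; leave the stack untouched
      call_stack
  end
end
```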
Because whatever gets returned from process_trace/2 is fed into the next call of process_trace/2 as its second parameter, we can do the above and treat that parameter like a stack of function calls. We only emit the actual IO.inspect we care about once we pop the function call (because it's resolved) off the stack.
All I do from here is persist these resolved calls into an :ets table of type :bag that we create when init/0 is called, and have the process_trace({_, _, :return_from, _, _}, _) clause write to it whenever a function call succeeds.
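As a rough sketch of that step (the :traced_calls table name and the record shape are my own assumptions, carried over from the reconstruction above):

```elixir
# Hypothetical: create the bag table once when the tracer is initialised
def init() do
  :ets.new(:traced_calls, [:bag, :named_table, :public])
  # ...the :dbg setup shown earlier...
end

# Hypothetical: the resolved branch of process_trace/2 calls this instead of IO.inspect,
# recording {mfa key, args, return value} so the assertion macros can look it up later
defp record_call(module, function, args, return_value) do
  :ets.insert(:traced_calls, {{module, function}, args, return_value})
end
```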
For convenience I really hackily threw together the following macros (please don't judge me 🤣 -- this is going to be cleaned up a lot before I throw it up on hex.pm):
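The actual macros aren't reproduced here either; as a rough sketch (assuming the hypothetical :traced_calls bag from above, with error handling elided), they might look something like this:

```elixir
defmodule Assertions do
  # Sketch only: both macros expand to lookups against the hypothetical :traced_calls
  # bag populated by process_trace/2; the real macros differ and are messier.

  # called? Mod.fun(pattern) — true if a traced call matching the pattern was recorded
  defmacro called?({{:., _, [module, function]}, _, args}) do
    quote do
      :traced_calls
      |> :ets.lookup({unquote(module), unquote(function)})
      |> Enum.any?(fn {_key, call_args, _return} -> match?(unquote(args), call_args) end)
    end
  end

  # capture var <- Mod.fun(pattern) — binds var to the return value of the first
  # matching traced call (raises a MatchError if none was recorded)
  defmacro capture({:<-, _, [binding, {{:., _, [module, function]}, _, args}]}) do
    quote do
      {_key, _call_args, return_value} =
        :traced_calls
        |> :ets.lookup({unquote(module), unquote(function)})
        |> Enum.find(fn {_key, call_args, _return} -> match?(unquote(args), call_args) end)

      unquote(binding) = return_value
    end
  end
end
```

Which lets us finally put it all together and do the following 🎉

```elixir
# Hypothetical usage (assumes `import Assertions`, and that init/0 and
# trace(EctoHandler, :handle_event, 1) ran before the code under test):
test "Creating entity foo is handled by EctoHandler" do
  assert {:ok, %MyApp.Foo{}} = MyApp.Foos.create_foo(...)

  # A real pattern match instead of a match-spec-like DSL
  assert called?(EctoHandler.handle_event({:insert, MyApp.Foo, _}))

  # Bind whatever the handler actually returned and assert on it directly
  capture handler_result <- EctoHandler.handle_event({:insert, MyApp.Foo, _})
  assert {:ok, _} = handler_result
end
```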
So as you can see, these macros give us some nice ways to solve two of the three issues we had with Mock:
- We can assert that functions were called (passing in a real pattern match vs. needing to pass in a match-spec-like DSL)
- We can assert that the functions were called and returned a particular value, which again can be any Elixir pattern match
- And as a bonus, you can bind whatever some function returned to a variable with my capture x <- function macro! 🤯
The remaining steps here are to add a before_compile step that automatically parses invocations of called? and capture, so that I can automate the Assertions.trace/3 call and make it completely transparent (it'll automatically trace the functions you care about), as well as to extensively test some edge cases, because we've run into a few already.
I'll also be looking into massively cleaning up these macros, since they were thrown together in literally a couple of hours of playing around with the ASTs. I hope this blog post helped as an intro to tracing on the BEAM, but also as just a fun notepad of my iterative problem-solving process 🙂