
Benchmarks #25

Open
itsdfish opened this issue Jul 4, 2020 · 10 comments

itsdfish commented Jul 4, 2020

Hello-

I want to propose adding some performance benchmarks. I think this could be useful for testing the performance implications of design decisions and comparing DiscreteEvents to other packages. As a starting point, here is a simple model in which two functions call each other in succession. This is designed to assess the overhead of the scheduler.

using DiscreteEvents, BenchmarkTools

function chit(c)
    event!(c, fun(chat, c), after, 1.0)
end

function chat(c)
    event!(c, fun(chit, c), after, 1.0)
end

function test()
    c = Clock()
    event!(c, fun(chat, c), at, 0.0)
    run!(c, 10^5)
end

@btime test()
150.800 ms (1425057 allocations: 30.14 MiB)
Here is similar code for SimPy:

import simpy
from simpy.util import start_delayed
import timeit

def chit(env):
    yield start_delayed(env, chat(env), delay=1)
    #print('now=%d',  (env.now))
    #print("chit")

def chat(env):
    yield start_delayed(env, chit(env), delay=1)
    #print('now=%d',  (env.now))
    #print("chat")
 
    
def test():
    env = simpy.Environment()
    env.process(chat(env))
    env.run(until=100000)
    
reps = 100   
total_time = timeit.timeit(test, number=reps)
print(total_time/reps)
0.9407285390101606

Disclaimer: I am not familiar with Python or SimPy, so I cannot say whether this is the fairest benchmark. It would be good to double-check with someone who has more experience in Python.

It is also worth noting that this benchmark is useful for assessing the overhead a package needs to manage and schedule events; it becomes less relevant as more time is spent processing the events themselves.
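For reference, the scheduling pattern being benchmarked reduces to a bare event loop. Here is a dependency-free sketch in plain Python (a hand-rolled heapq loop with made-up names, not any package's API) that gives a floor for pure scheduler overhead:

```python
import heapq

def run_chit_chat(horizon):
    """Minimal event loop mirroring the chit/chat benchmark:
    two handlers reschedule each other 1.0 time units apart."""
    events = []   # priority queue of (time, seq, handler)
    seq = 0
    fired = 0

    def schedule(t, handler):
        nonlocal seq
        seq += 1  # tie-breaker keeps heap comparisons on numbers
        heapq.heappush(events, (t, seq, handler))

    def chit(now):
        schedule(now + 1.0, chat)

    def chat(now):
        schedule(now + 1.0, chit)

    schedule(0.0, chat)
    while events:
        now, _, handler = heapq.heappop(events)
        if now > horizon:
            break
        handler(now)
        fired += 1
    return fired

print(run_chit_chat(10))  # 11 events fire, at t = 0, 1, ..., 10
```

Timing this loop at a large horizon would show how much of the 150 ms above is package machinery rather than raw event dispatch.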

itsdfish commented Jul 5, 2020

I think this is an analogous model for SimJulia, but someone should confirm.

using SimJulia, ResumableFunctions, BenchmarkTools

@resumable function chit(env::Environment)
    @yield timeout(env, 1)
    # println("chit ", now(env))
    p = @process chat(env)
    @yield p
end

@resumable function chat(env::Environment)
    @yield timeout(env, 1)
    # println("chat ", now(env))
    p = @process chit(env)
    @yield p
end

function test()
    sim = Simulation()
    @process chit(sim)
    run(sim, 10^5)
    return sim
end

@btime test()

687.656 ms (2946944 allocations: 107.53 MiB)

pbayer commented Jul 5, 2020

Hello,

thank you for your interesting code examples. You are welcome to develop this further. For now I can only offer some comments:

  • @non-Jedi proposed doing some benchmarks earlier on Discourse. Maybe he is interested in collaborating.
  • I then developed some benchmarks, which are now in DiscreteEventsCompanion.jl. They were very helpful for development since they allowed me to compare the performance of successive improvement steps.
  • But they don't yet compare the performance of DiscreteEvents.jl, SimJulia, and SimPy as you did. Your results look promising, but it is perhaps still too early for such comparisons since DiscreteEvents.jl is not yet a mature package.
  • Comparing packages should also take into account how convenient it is to express simulation problems with them.
  • I chose real examples as benchmarks for DiscreteEvents.jl and would very much like someone to express them in other packages.
  • The converse is certainly also true: it would be helpful to see how their examples can be expressed in DiscreteEvents.jl.

I very much appreciate any help with that.

itsdfish commented Jul 5, 2020

Sounds good. I think we are in agreement: some of the benchmarks should focus on important features of the package (such as scheduler performance), whereas others should be more realistic and focus on the relative ease of developing models in different packages. Since I have only cursory knowledge of discrete event simulations, I'll add some of the simpler examples. What I am currently considering are the M/M/c queue and machine shop examples here and here. These might be a good starting point for comparing the packages. I'll submit a PR at DiscreteEventsCompanion over the next couple of weeks, and you can review and merge the results when you think DiscreteEvents is mature enough. Nonetheless, I think having that information available might be helpful for development.
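As a baseline for the queueing examples, the single-server FIFO core reduces to the Lindley recursion (start_i = max(arrive_i, finish_{i-1})). A dependency-free sketch in plain Python, with deterministic times standing in for the exponential draws a real M/M/1 benchmark would use:

```python
def mm1_waits(arrivals, service):
    """Waiting time of each job in a single-server FIFO queue via the
    Lindley recursion: start_i = max(arrive_i, finish_{i-1})."""
    finish = 0.0
    waits = []
    for a in arrivals:
        start = max(a, finish)      # wait until the server is free
        waits.append(start - a)
        finish = start + service
    return waits

# arrivals every 0.5, service 1.0: the queue grows without bound
arrivals = [0.5 * i for i in range(5)]
print(mm1_waits(arrivals, 1.0))  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

Any package's M/M/1 result can be sanity-checked against this recursion given the same arrival and service times.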

itsdfish commented Jul 6, 2020

Hi @pbayer. I was hoping you could help me with a small issue. I am trying to set a condition to terminate the model. In particular, I want to terminate after all processes have finished, rather than after a fixed amount of simulated time has elapsed. Here is what I tried:

function main(clock, ...)
   ....
   isempty(clock.processes) ? stop!(clock) : nothing
end

run!(clock,  Inf)

Unfortunately, the model continued to run indefinitely. Is there a way to terminate conditionally?
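The stopping rule being asked about here — run until no events remain rather than until a fixed end time — looks like this in a hand-rolled event loop (a plain Python sketch with hypothetical names, not the DiscreteEvents API):

```python
import heapq

def run_until_idle(initial_events):
    """Run until the event queue is empty instead of until a fixed end
    time. initial_events is a list of (time, handler) pairs; handlers
    receive (now, schedule) and may push further events."""
    queue, seq = [], 0

    def schedule(t, handler):
        nonlocal seq
        seq += 1
        heapq.heappush(queue, (t, seq, handler))

    for t, h in initial_events:
        schedule(t, h)

    now = 0.0
    while queue:                 # terminate when nothing is pending
        now, _, handler = heapq.heappop(queue)
        handler(now, schedule)
    return now                   # time of the last event

def step(k):
    """A chain that reschedules itself k more times, then goes quiet."""
    def handler(now, schedule):
        if k > 0:
            schedule(now + 1.0, step(k - 1))
    return handler

print(run_until_idle([(0.0, step(3))]))  # 3.0
```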

pbayer added a commit that referenced this issue Jul 13, 2020
pbayer commented Jul 13, 2020

The clock likely stopped, but it gave no message. The commit above fixes that.

hdavid16 commented Apr 28, 2023

@itsdfish,

great idea on benchmarking. It's really nice to see that DiscreteEvents.jl seems to be faster than SimJulia.jl. I did some benchmarking as well on a simple single-server queue and got similar results. DiscreteEvents uses less memory (~60x less in this case) and fewer allocations (~2 orders of magnitude fewer in this case):

# using DiscreteEvents

julia> @benchmark DE_sim()
BenchmarkTools.Trial: 1031 samples with 1 evaluation.
 Range (min … max):  4.280 ms … 14.904 ms  ┊ GC (min … max): 0.00% … 69.28%
 Time  (median):     4.549 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   4.841 ms ±  1.186 ms  ┊ GC (mean ± σ):  2.72% ±  7.86%

  █▇▅▅▄▄▃▁
  ██████████▇▅▆▄▅▅▄▄▄▁▁▁▁▁▄▄▁▁▁▁▁▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄▅▁▅ █
  4.28 ms      Histogram: log(frequency) by time     13.4 ms <

 Memory estimate: 1.03 MiB, allocs estimate: 22618.

# using SimJulia

julia> @benchmark SJ_sim()
BenchmarkTools.Trial: 29 samples with 1 evaluation.
 Range (min … max):  139.589 ms … 234.894 ms  ┊ GC (min … max): 0.00% … 3.59%
 Time  (median):     172.818 ms               ┊ GC (median):    5.14%
 Time  (mean ± σ):   176.658 ms ±  19.867 ms  ┊ GC (mean ± σ):  5.20% ± 1.31%

              █ ▃    ▃▃▃       ▃  ▃
  ▇▁▁▁▁▁▁▁▇▁▇▁█▁█▁▇▇▁███▇▇▁▁▇▇▁█▁▇█▁▁▁▁▁▇▁▇▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▇ ▁
  140 ms           Histogram: frequency by time          235 ms <

 Memory estimate: 64.83 MiB, allocs estimate: 1016871.

Code:

##
using DiscreteEvents, DataStructures

mutable struct Client
    id::Int
    arrive::Float64  # arrival time
    start::Float64   # time service begins
    leave::Float64   # time service ends
    priority::Int # client priority
end

function arrive(clk, count, input, M_arr)
    count[1] += 1
    client = Client(count[1], 0.0, 0.0, 0.0, count[1])
    @delay clk M_arr
    client.arrive = tau(clk)
    enqueue!(input, client, client.priority)
end

has_pending_jobs(input) = !isempty(input)
function serve(clk, input, output, M_serve)
    @wait clk has_pending_jobs(input)
    client = dequeue!(input)
    client.start = tau(clk)
    @delay clk M_serve
    client.leave = tau(clk)
    push!(output, client)
end

function DE_sim()
    M_arr = 0.5
    M_serve = 1.0
    count = [0]

    clock = Clock()
    input = PriorityQueue{Client, Real}()
    output = Client[]
    
    DiscreteEvents.@process arrive(clock, count, input, M_arr)
    DiscreteEvents.@process serve(clock, input, output, M_serve)
    @run! clock 200
    
    return clock, input, output
end

##
using SimJulia, ResumableFunctions

@resumable function arrive_and_serve(clk, count, srvr, output, M_arr, M_serve)
    count[1] += 1
    client = Client(count[1], 0.0, 0.0, 0.0, count[1])
    @yield timeout(clk, M_arr)
    client.arrive = now(clk)
    @yield put(srvr, client; priority=client.priority)
    client.start = now(clk)
    @yield timeout(clk, M_serve)
    get(srvr)
    client.leave = now(clk)
    push!(output, client)
end

function SJ_sim()
    M_arr = 0.5
    M_serve = 1.0
    count = [0]

    clock = Simulation()
    srvr = Store{Client}(clock, capacity = UInt(1))
    output = Client[]

    for _ in 1:400
        SimJulia.@process arrive_and_serve(clock, count, srvr, output, M_arr, M_serve)
    end

    run(clock, 200)

    return clock, srvr, output
end

##
using BenchmarkTools
@benchmark DE_sim()
@benchmark SJ_sim()

@itsdfish
Your benchmarks look interesting and useful. Thanks for sharing.

I'm not very familiar with SimJulia. Is there an analog in DiscreteEvents for the for loop:

    for _ in 1:400
        SimJulia.@process arrive_and_serve(clock, count, srvr, output, M_arr, M_serve)
    end

I just want to make sure that is not accounting for the difference in performance.


hdavid16 commented Apr 28, 2023

That might be the case here. DiscreteEvents allows creating processes that are triggered periodically. In SimJulia, you need to create each instance of the process separately, which is why I use the for loop...

The server in SimJulia also has two queues (one for adding and another for removing jobs). Each job that is in the queue is wrapped into a Put or Get struct, which increases the allocations.
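The difference between the two process styles can be sketched with plain Python generators (an illustration only, not either package's API): one cycling instance does the same work for which the one-shot style needs N coroutine objects up front:

```python
def looping(count):
    # one instance, resumed once per arrival (cycling-process style)
    while True:
        count[0] += 1
        yield

def one_shot(count):
    # one instance handles a single arrival (create-N-processes style)
    count[0] += 1
    yield

a = [0]
g = looping(a)
for _ in range(400):
    next(g)          # resume the single instance 400 times

b = [0]
procs = [one_shot(b) for _ in range(400)]  # 400 generator objects
for p in procs:
    next(p)

print(a[0], b[0])  # 400 400
```

Same arrival count either way, but the one-shot style pays for 400 extra objects, which is one place the allocation gap can come from.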

@itsdfish

Thanks for clarifying. It seems that some comparisons will be more challenging than others.

hdavid16 commented May 2, 2023

I just saw the work you and Paul did on DiscreteEventsCompanion: pbayer/DiscreteEventsCompanion.jl#1

It's amazing how much the speed improved just by changing how the problem is modeled. I really like the flexibility of DiscreteEvents.jl; this level of modeling flexibility does not seem to be available in SimJulia.jl.
