multithreaded simulation runs #738

Closed
pcaillou opened this Issue May 30, 2015 · 11 comments
@pcaillou
Contributor
Hi there!

Is there any possibility to run simulations on more than one thread?
I'm looking for speedup options for simulations with a lot of agents (100,000).

It would be great to have multiple threads working on the reflexes for each step. As
the control flow is not (obviously) defined (i.e. which agent's reflex is executed first),
threads would not introduce concurrency problems into the GAMA model. (I can imagine
that there might be interpreter problems.)

Thanks & Cheers, Achim

Original issue reported on code.google.com by Achim.Gaedke on 2013-11-25 06:19:44

@pcaillou pcaillou self-assigned this May 30, 2015
@pcaillou
Contributor
Hi,

We are discussing this possibility, and it is not obvious how to introduce it without
requiring some "know-how" about concurrency from the user. Contrary to what you write,
concurrency problems may arise when running reflexes in parallel, depending on what
the reflexes manipulate. Simply changing an agent's location, for instance, involves
concurrent accesses to the shared environment.
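For example, even a minimal movement reflex (a sketch with an illustrative species; the moving skill is built into GAMA) writes the agent's location, and therefore touches the shared spatial index, at every step:

```
// Illustrative sketch only: each call to 'wander' updates the agent's
// 'location', which in turn updates the shared spatial index of the
// environment. Run in parallel, thousands of such updates become
// concurrent accesses to that shared structure.
species walker skills: [moving] {
    reflex move {
        do wander amplitude: 30.0;
    }
}
```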

Anyway, as I said, we are thinking about introducing some concurrency at some point,
but it will require careful moves. 

Do not hesitate to share your needs and ideas!

Cheers
Alexis


Original issue reported on code.google.com by alexis.drogoul on 2013-12-07 06:14:15

@pcaillou
Contributor
As the execution order of the reflexes scheduled at the same time step is not (strictly)
defined (by specification), introducing parallel execution would not cause any problem.


The ask statement could allow a context to be locked, i.e. no other read/write access
occurs while the block runs. That would prevent the exposure of inconsistent states. (Oh, it would be great
to model the dining philosophers with a multi-threaded GAMA!)
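A purely hypothetical sketch of that idea (no such locking facet exists in GAML; the philosopher species and its actions are invented for illustration):

```
// Hypothetical: suppose the asked agent could be held exclusively for the
// duration of the block, so that no other thread reads or writes its state
// in the meantime. 'philosopher', 'take_forks', 'eat' and 'release_forks'
// are made-up names for this example.
ask one_of(philosopher) {   // imagine e.g. an additional 'locked: true' facet here
    do take_forks;
    do eat;
    do release_forks;
}
```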

In general I would think that ABM naturally favours parallel execution, especially
if the simulation progresses in small steps (i.e. inconsistent states are still a good
approximation of "reality").

As simulations can become computationally intensive, any means of speeding them up
is highly welcome (that includes me doing smarter modelling :-) ).

Original issue reported on code.google.com by achim.gaedke@signal41.com on 2013-12-19 01:59:24

@pcaillou
Contributor
Hi,
I would like to know more details about your needs, because I currently have, on my computer, a version
that can launch multi-threaded simulations in headless mode, i.e. multiple instances of
one simulation. It looks like running the simulation 100 times, but spread over several threads.
So I would like to know whether your speedup need is covered by this mode, or whether you need the
agents of a single simulation to be shared among several threads. In other words, in your case, if we have 10,000
agents, must we have 10,000 threads or something like that?
Cheers.

Original issue reported on code.google.com by hqnghi88 on 2013-12-19 09:22:31

@pcaillou
Contributor
Hi!

Thanks for coming back and asking. That is much appreciated.

It would be great to use all the cores of a quad-core machine to speed up a single
simulation run (assuming that memory access is not the bottleneck). My simulations comprise
10,000 to 100,000 agents, so I'd be happy to have four worker threads.

Cheers, Achim

Original issue reported on code.google.com by achim.gaedke@signal41.com on 2013-12-19 22:15:44

@pcaillou
Contributor
(No text was entered with this change)

Original issue reported on code.google.com by patrick.taillandier on 2014-02-15 11:16:16

  • Labels added: Milestone-1.7
@pcaillou
Contributor
(No text was entered with this change)

Original issue reported on code.google.com by gama.platform on 2014-04-06 09:43:02

  • Labels added: Development, Simulation, Batch
@pcaillou
Contributor
(No text was entered with this change)

Original issue reported on code.google.com by gama.platform on 2014-04-06 09:55:18

@ptaillandier ptaillandier added this to the Gama 1.7 milestone May 31, 2015
@AlexisDrogoul
Member

As a side note and reminder: the current version of GAMA supports multi-simulation experiments (but within a single thread). Transforming this architecture into a multi-threaded run is not trivial, and probably involves using ThreadLocal values for all the outputs (for example, the displays maintain a state that happens to be shared among the simulations). But it is feasible: without outputs, it runs almost perfectly.
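For reference, a multi-simulation experiment of that kind can be written as in this minimal sketch (the experiment name is illustrative); all the simulations it creates are, at this point, stepped by a single thread:

```
// Minimal sketch: the experiment starts with one simulation, and
// 'create simulation' adds three more, all sharing the same thread
// at the time of this comment.
experiment several_runs type: gui {
    init {
        create simulation number: 3;
    }
}
```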

@AlexisDrogoul AlexisDrogoul added a commit that referenced this issue Feb 8, 2016
@AlexisDrogoul AlexisDrogoul Addresses Issue #738
Restricts the usage of the IScope obtained through GAMA, in order to
prevent external elements from polluting the scope. The next step will be to
provide read-only versions of the scope.
e001b6f
@AlexisDrogoul
Member

The basic proof of concept has been committed and works quite well with the displays, too: multi-simulations now run in a multi-threaded way, but they are synchronized on the steps (i.e. the steps of the simulations run in parallel, and the experiment waits, at every step, for all of these parallel steps to terminate). I still have to add:

  • a preference to turn it off and on by default
  • a preference to limit the number of threads used
  • a facet in experiment to enable or disable this behavior (and maybe fix the max. number of threads, too)
  • this behavior for batch experiments
  • a check to turn it off for headless experiments
@AlexisDrogoul AlexisDrogoul added a commit that referenced this issue Feb 10, 2016
@AlexisDrogoul AlexisDrogoul New addition to complete #738.
Batch experiments now use multithreading/multi-simulation for running
repetitions of simulations. This can be controlled using the `multicore`
facet or the global preferences.
0b5dd41
@AlexisDrogoul
Member

Batch experiments now use multi-threading partially (i.e. when running parallel repetitions of the simulations). I am not sure whether opening it up further would be helpful. In any case, the speed gain is impressive when running a large number of repetitions. I am closing this issue.
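Based on the `multicore` facet named in the commit above, such a batch experiment could look like the following sketch (the explored parameter, its values and the optimization target are assumed to be defined in the model; they are not part of the platform):

```
// Sketch only: 'multicore' is the facet mentioned in the commit above.
// 'speed' and 'avg_satisfaction' are assumed to be declared in the
// model's global block.
experiment explore type: batch repeat: 32 keep_seed: false multicore: true {
    parameter "Agent speed" var: speed among: [0.5, 1.0, 2.0];
    method exhaustive maximize: avg_satisfaction;
}
```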

@AlexisDrogoul
Member

Recent developments support running agents in parallel, either within their species/grid or as targets of an ask statement. It is still experimental, but it can be tested on the latest builds.

There are two ways to test it:

  • either set the appropriate preferences (in "Performances > Concurrency"), which apply to all models;
  • or do it on a per-model basis, by adding the facet parallel: true to the selected grid/species definitions or ask statements. The value of this facet can also be set to an integer, which then represents the minimum number of agents below which the run stays sequential. So, for instance, ask my_agents parallel: 1 { ... do something ... } will make all the agents run in parallel, and species aa parallel: 50 {...} will step the agents in parallel batches of 50 (see the sketch below). Having this level of control is important because, for simple agents, the cost of creating parallel tasks and scheduling them among the threads can be higher than their execution time, thereby defeating the purpose of running them in parallel.

A default value for this 'sequential threshold' can be set in the preferences too.
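Put together, the two examples above look like this in context (species, attribute and reflex names are illustrative):

```
// Illustrative sketch of the 'parallel:' facet on a species and on 'ask'.
species aa parallel: 50 {
    float energy <- 100.0;
    // with fewer than 50 agents the step stays sequential; above that
    // threshold, the agents are stepped in parallel batches of 50
    reflex update {
        energy <- energy - 1.0;
    }
}

species controller {
    reflex feed {
        // parallel: 1 -> the asked agents run in parallel whatever their number
        ask aa parallel: 1 {
            energy <- energy + 0.5;
        }
    }
}
```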

Note that this is an experimental feature. Depending on the model, unexpected conflicts or errors can happen (especially if the agents share common structures or manipulate each other). Also, although the parallel runs try to preserve as much of the original sequence as possible, there is no guarantee that the agents will be scheduled in the order defined by the modeler.

That said, preliminary tests show vast improvements for models run on multi-core architectures (up to 3x faster in some cases).

Enjoy !

@AlexisDrogoul AlexisDrogoul added a commit that referenced this issue Oct 16, 2016
@AlexisDrogoul AlexisDrogoul Fixes #2025. Fixes #738.
Generalization of the 'parallel:' facet to grid, species, ask and
experiment. Changes in the API of IScope for supporting parallel
operations. Changes in QuadTree and GamaGraph for solving sync problems.
Changes in the step method of agents (now divided into 3 sub-methods:
preStep(), doStep(), postStep()). Addition of the
msi.gama.runtime.concurrent package and several classes dedicated to
concurrent runs.

Signed-off-by: AlexisDrogoul <alexis.drogoul@gmail.com>
683d6c4
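Following that commit, the same facet also applies to grids and to experiments, as in this sketch (the grid name and the use of grid_value are illustrative; on an experiment, the facet presumably governs how its simulations are stepped):

```
// Sketch: 'parallel:' on a grid and on an experiment, as per the commit above.
grid cell width: 100 height: 100 parallel: true {
    reflex decay {
        grid_value <- grid_value * 0.99;  // grid_value is a built-in grid attribute
    }
}

experiment run_many type: gui parallel: true {
    init {
        create simulation number: 3;  // these simulations can then be stepped concurrently
    }
}
```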