SDE #1884

Merged
merged 96 commits into from Oct 5, 2022
Changes from 90 commits
Commits (96)
0060463
cpu
boeschf Apr 15, 2022
dc12fda
Merge remote-tracking branch 'upstream/master' into SDE
boeschf May 2, 2022
aa07ab4
missing include
boeschf May 2, 2022
687a4c6
simplified API calls
boeschf May 3, 2022
263dd86
better random numbers
boeschf May 18, 2022
e9a3255
size_type, rand, coalesce
boeschf May 26, 2022
b46d459
Merge remote-tracking branch 'upstream/master' into SDE
boeschf May 26, 2022
8071f79
fixes after merge
boeschf May 26, 2022
5aeb5cd
removed comment
boeschf May 26, 2022
1e3518b
removed printouts
boeschf May 26, 2022
1b66a60
seed now settable via overrides
boeschf May 26, 2022
71ddbda
moved random number generation from modcc to backend
boeschf Jul 12, 2022
8a7e2bc
cosmetics
boeschf Jul 12, 2022
4e05517
moved random123 out of the way
boeschf Jul 12, 2022
ae99b0f
Merge remote-tracking branch 'upstream/master' into SDE_backend
boeschf Jul 12, 2022
d990f77
cleanup
boeschf Jul 12, 2022
93e5d66
cleanup
boeschf Jul 12, 2022
d1ecd6c
cleanup
boeschf Jul 12, 2022
4930f83
cleanup
boeschf Jul 12, 2022
ccc55f7
simd printer
boeschf Jul 12, 2022
26ff30e
export backtrace
boeschf Jul 12, 2022
214f0d0
backtrace
boeschf Jul 12, 2022
1876136
storage
boeschf Jul 13, 2022
9e7d194
gpu backend
boeschf Jul 13, 2022
be25475
gpu printer
boeschf Jul 13, 2022
ec7e609
documentation
boeschf Jul 14, 2022
51832fc
review low hanging fruit
boeschf Jul 15, 2022
ca98b1c
removed stochastic flag
boeschf Jul 17, 2022
2103820
added global seed via recipe, removed per-mechanism seed
boeschf Jul 17, 2022
2fbaf04
more detailed comments
boeschf Jul 18, 2022
52996d8
seed initialization
boeschf Jul 19, 2022
0824a96
seed initialization documentation
boeschf Jul 19, 2022
5d141aa
removed blank lines
boeschf Jul 20, 2022
e52745d
Update doc/concepts/mechanisms.rst
boeschf Aug 2, 2022
f06979d
Update doc/concepts/mechanisms.rst
boeschf Aug 2, 2022
1443dc5
Update doc/cpp/simulation.rst
boeschf Aug 2, 2022
ac3871c
Update doc/fileformat/nmodl.rst
boeschf Aug 2, 2022
a07fd85
link to wikipedia for Ito calculus
boeschf Aug 2, 2022
b36b885
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Aug 9, 2022
0edcfd7
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Aug 10, 2022
064b698
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Aug 29, 2022
256a272
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Aug 29, 2022
d2c5bf6
modcc tests
boeschf Aug 29, 2022
c0555ed
simulation builder test
boeschf Aug 29, 2022
7bd20a4
simulation builder test
boeschf Aug 29, 2022
40428c9
easier random values extraction with probes
boeschf Aug 30, 2022
3cdca99
python probes
boeschf Aug 30, 2022
426eef0
extensive tests for random numbers
boeschf Sep 2, 2022
479af5f
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 2, 2022
c860f4d
typo
boeschf Sep 2, 2022
db4bd21
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 5, 2022
b912862
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 5, 2022
9534a50
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 6, 2022
3fc3238
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 6, 2022
91b0e1f
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 6, 2022
913ec2a
remove warning: comparison to unsigned
boeschf Sep 6, 2022
432e02d
review part 1
boeschf Sep 13, 2022
66f9e12
review part 2
boeschf Sep 13, 2022
43cd610
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 13, 2022
addcd4e
minor cosmentic fixes
boeschf Sep 13, 2022
42864b0
doc updates
boeschf Sep 13, 2022
c7aa1b3
simplifed calling rng
boeschf Sep 13, 2022
3aedb37
simplified gpu shared state rng
boeschf Sep 13, 2022
67dddc9
bump abi version
boeschf Sep 13, 2022
79fc2e8
fix some sphinx errors
boeschf Sep 13, 2022
8bc054c
dev docs
boeschf Sep 13, 2022
e0699a7
typo
boeschf Sep 15, 2022
d605697
different solver test
boeschf Sep 15, 2022
2c3b0ec
coupled SDE
boeschf Sep 15, 2022
1d54ca8
small doc updates
boeschf Sep 19, 2022
093c9a0
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 26, 2022
44b16d9
dev doc update
boeschf Sep 26, 2022
6786464
typo
boeschf Sep 26, 2022
c2fbfb7
changed weird indentation
boeschf Sep 26, 2022
6c82d04
remove unnecessary move
boeschf Sep 26, 2022
be7a883
test with static gtest
boeschf Sep 26, 2022
a033740
gpu fix
boeschf Sep 27, 2022
fa7b7be
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Sep 27, 2022
133dc16
Merge branch 'SDE' into SDE_gpu_fix
boeschf Sep 27, 2022
c00466a
guard against multiple defines
boeschf Sep 27, 2022
9846fec
padding for gpu indices
boeschf Sep 27, 2022
7bada96
gpu and flag for not using random number generation on gpus
boeschf Sep 28, 2022
13335a0
fill empty fields in rng with default values different from zero
boeschf Sep 28, 2022
f308b33
use aligned memory (width_padded) and add better compile time flags
boeschf Sep 29, 2022
dea964e
add restrict keyword
boeschf Sep 29, 2022
b39c06d
test printouts
boeschf Sep 29, 2022
9e09b09
Merge remote-tracking branch 'origin/SDE' into SDE_gpu_fix
boeschf Sep 29, 2022
0696b4e
removed printouts
boeschf Sep 29, 2022
c881fb0
Merge pull request #1 from boeschf/SDE_gpu_fix
boeschf Sep 29, 2022
f914b51
fixed value_width_padded for simd
boeschf Sep 29, 2022
811eab0
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Oct 4, 2022
4a7547e
gpu: delegated random number generation to pimpl
boeschf Oct 4, 2022
d73eea2
forgot include
boeschf Oct 4, 2022
c305b73
Merge remote-tracking branch 'upstream/master' into SDE
boeschf Oct 4, 2022
ac87eef
fix cable solver
boeschf Oct 4, 2022
ccb18c9
white space
boeschf Oct 4, 2022
12 changes: 12 additions & 0 deletions CMakeLists.txt
@@ -46,6 +46,8 @@ option(ARB_BACKTRACE "Enable stacktraces on assertion and exceptions (requires B

set(ARB_GPU "none" CACHE STRING "GPU backend and compiler configuration")
set_property(CACHE PROPERTY STRINGS "none" "cuda" "cuda-clang" "hip")
option(ARB_USE_GPU_RNG
"Use GPU generated random numbers (only cuda, not bitwise equal to CPU version)" OFF)

# Use bundled 3rd party libraries

@@ -405,6 +407,16 @@ if(ARB_VECTORIZE)
list(APPEND ARB_MODCC_FLAGS "--simd")
endif()

# Random number creation
# -----------------------------------------------

if(ARB_USE_GPU_RNG AND (ARB_WITH_NVCC OR ARB_WITH_CUDA_CLANG))
set(ARB_USE_GPU_RNG_IMPL TRUE)
else()
set(ARB_USE_GPU_RNG_IMPL FALSE)
target_compile_definitions(arbor-private-deps INTERFACE ARB_ARBOR_NO_GPU_RAND)
endif()

#----------------------------------------------------------
# Set up install paths, permissions.
#----------------------------------------------------------
4 changes: 4 additions & 0 deletions arbor/CMakeLists.txt
@@ -4,6 +4,7 @@ set(arbor_sources
arbexcept.cpp
assert.cpp
backends/multicore/shared_state.cpp
backends/multicore/rand.cpp
communication/communicator.cpp
communication/dry_run_context.cpp
benchmark_cell_group.cpp
@@ -83,6 +84,9 @@ if(ARB_WITH_GPU)
backends/gpu/threshold_watcher.cu
memory/fill.cu
)
if (ARB_USE_GPU_RNG_IMPL)
list(APPEND arbor_sources backends/gpu/rand.cu)
endif()
endif()

if(ARB_WITH_MPI)
1 change: 1 addition & 0 deletions arbor/backends/gpu/gpu_store_types.hpp
@@ -17,6 +17,7 @@ namespace gpu {

using array = memory::device_vector<arb_value_type>;
using iarray = memory::device_vector<arb_index_type>;
using sarray = memory::device_vector<arb_size_type>;

using deliverable_event_stream = arb::gpu::multi_event_stream<deliverable_event>;
using sample_event_stream = arb::gpu::multi_event_stream<sample_event>;
64 changes: 64 additions & 0 deletions arbor/backends/gpu/rand.cu
@@ -0,0 +1,64 @@
#include <arbor/gpu/gpu_api.hpp>
#include <arbor/gpu/gpu_common.hpp>

#include "backends/rand_impl.hpp"

namespace arb {
namespace gpu {

namespace kernel {
__global__
void generate_random_numbers(
arb_value_type* __restrict__ dst,
std::size_t width,
std::size_t width_padded,
std::size_t num_rv,
arb::cbprng::value_type seed,
arb::cbprng::value_type mech_id,
arb::cbprng::value_type counter,
arb_size_type const * __restrict__ gids,
arb_size_type const * __restrict__ idxs,
unsigned cache_size) {
// location and variable number extracted from thread block
const int i = threadIdx.x + blockDim.x*blockIdx.x;
const arb::cbprng::value_type n = blockIdx.y;

if (i < width) {
const arb::cbprng::value_type gid = gids[i];
const arb::cbprng::value_type idx = idxs[i];
const auto r = arb::cbprng::generator{}(arb::cbprng::array_type{seed, mech_id, n, counter},
arb::cbprng::array_type{gid, idx, 0xdeadf00dull, 0xdeadbeefull});
const auto a = r123::boxmuller(r[0], r[1]);
const auto b = r123::boxmuller(r[2], r[3]);
dst[i + width_padded*(0 + cache_size*n)] = a.x;
dst[i + width_padded*(1 + cache_size*n)] = a.y;
dst[i + width_padded*(2 + cache_size*n)] = b.x;
dst[i + width_padded*(3 + cache_size*n)] = b.y;
}
}
} // namespace kernel

void generate_random_numbers(
arb_value_type* dst, // points to random number storage
std::size_t width, // number of sites
std::size_t width_padded, // padded number of sites
std::size_t num_rv, // number of random variables
cbprng::value_type seed, // simulation seed value
cbprng::value_type mech_id, // mechanism id
cbprng::value_type counter, // step counter
arb_size_type const * gid, // global cell ids (size = width)
arb_size_type const * idx // per-cell location index (size = width)
) {
using impl::block_count;

unsigned const block_dim = 128;
unsigned const grid_dim_x = block_count(width, block_dim);
unsigned const grid_dim_y = num_rv;

kernel::generate_random_numbers<<<dim3{grid_dim_x, grid_dim_y, 1}, block_dim>>>(
dst, width, width_padded, num_rv, seed, mech_id, counter, gid, idx, cbprng::cache_size());
}

} // namespace gpu
} // namespace arb

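For readers who want the generation scheme without the CUDA and Random123 machinery: each thread handles one site i for one random variable n (the block's y index), makes a single counter-based draw keyed on (seed, mech_id, n, counter) and (gid, idx, constants), turns the four outputs into four normal deviates via Box-Muller, and stores them with stride width_padded. The host-side sketch below reproduces only that indexing and transform; the mix() hash is an invented stand-in for the real counter-based generator (arb::cbprng::generator) and r123::boxmuller, and cache_size is assumed to be 4, i.e. the four outputs of one draw fill the four cache slots.

#include <cmath>
#include <cstddef>
#include <cstdint>

namespace sketch {

// Stand-in for a counter-based generator: splitmix64-style bit mixer.
// NOT the production RNG; it only demonstrates the keyed, stateless mapping.
inline std::uint64_t mix(std::uint64_t x) {
    x += 0x9e3779b97f4a7c15ull;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ull;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebull;
    return x ^ (x >> 31);
}

// Map 64 random bits to a uniform double in (0, 1].
inline double to_unit(std::uint64_t v) {
    return (static_cast<double>(v >> 11) + 1.0) * (1.0/9007199254740992.0); // 2^-53
}

// Box-Muller: two uniforms in (0, 1] -> two independent standard normal deviates.
inline void box_muller(double u0, double u1, double& z0, double& z1) {
    const double two_pi = 6.283185307179586;
    const double r = std::sqrt(-2.0*std::log(u0));
    z0 = r*std::cos(two_pi*u1);
    z1 = r*std::sin(two_pi*u1);
}

// Same storage layout as the kernel: for variable n and site i, the four values
// of one draw land in cache slots 0..3, each a width_padded-strided sub-array.
void generate_random_numbers(double* dst,
                             std::size_t width, std::size_t width_padded,
                             std::size_t num_rv,
                             std::uint64_t seed, std::uint64_t mech_id,
                             std::uint64_t counter,
                             const std::uint32_t* gids, const std::uint32_t* idxs,
                             unsigned cache_size = 4) {
    for (std::size_t n = 0; n < num_rv; ++n) {
        for (std::size_t i = 0; i < width; ++i) {
            // key the draw on everything that identifies this value uniquely
            const std::uint64_t k = mix(seed ^ mix(mech_id ^ mix(n ^ mix(counter ^
                                    mix(gids[i] ^ mix(idxs[i]))))));
            double z0, z1, z2, z3;
            box_muller(to_unit(mix(k)),     to_unit(mix(k + 1)), z0, z1);
            box_muller(to_unit(mix(k + 2)), to_unit(mix(k + 3)), z2, z3);
            dst[i + width_padded*(0 + cache_size*n)] = z0;
            dst[i + width_padded*(1 + cache_size*n)] = z1;
            dst[i + width_padded*(2 + cache_size*n)] = z2;
            dst[i + width_padded*(3 + cache_size*n)] = z3;
        }
    }
}

} // namespace sketch
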
110 changes: 101 additions & 9 deletions arbor/backends/gpu/shared_state.cpp
@@ -188,7 +188,8 @@ shared_state::shared_state(
const std::vector<arb_value_type>& temperature_K,
const std::vector<arb_value_type>& diam,
const std::vector<arb_index_type>& src_to_spike,
unsigned // alignment parameter ignored.
unsigned, // alignment parameter ignored.
arb_seed_type cbprng_seed_
):
n_intdom(n_intdom),
n_detector(n_detector),
@@ -207,6 +208,7 @@ shared_state::shared_state(
diam_um(make_const_view(diam)),
time_since_spike(n_cell*n_detector),
src_to_spike(make_const_view(src_to_spike)),
cbprng_seed(cbprng_seed_),
deliverable_events(n_intdom)
{
memory::fill(time_since_spike, -1.0);
@@ -242,6 +244,17 @@ struct chunk_writer {
end += stride;
return p;
}

template <typename Seq, typename = std::enable_if_t<util::is_contiguous_v<Seq>>>
T* append_with_padding(Seq&& seq, typename util::sequence_traits<Seq>::value_type value) {
std::size_t n = std::size(seq);
arb_assert(n <= stride);
std::size_t r = stride - n;
auto p = append_freely(std::forward<Seq>(seq));
memory::fill(memory::device_view<typename util::sequence_traits<Seq>::value_type>(end, r), value);
end += r;
return p;
}
};
}

@@ -263,6 +276,41 @@ void shared_state::set_parameter(mechanism& m, const std::string& key, const std
memory::copy(memory::make_const_view(values), memory::device_view<arb_value_type>(data, m.ppack_.width));
}

void shared_state::update_prng_state(mechanism& m) {
if (!m.mech_.n_random_variables) return;
auto const mech_id = m.mechanism_id();
auto& store = storage[mech_id];
auto const counter = store.random_number_update_counter_++;
const auto cache_idx = cbprng::cache_index(counter);

m.ppack_.random_numbers = store.random_numbers_d_[cache_idx].data();

if (cache_idx == 0) {
// Generate random numbers every cbprng::cache_size() iterations:
// For each random variable we will generate cbprng::cache_size() values per site
// and there are width sites.
// The RNG will be seeded by a global seed, the mechanism id, the variable index, the
// current site's global cell, the site index within its cell and a counter representing
// time.
const auto num_rv = store.random_numbers_d_[0].size();
const auto width_padded = store.value_width_padded;
const auto width = m.ppack_.width;
#ifdef ARB_ARBOR_NO_GPU_RAND
Review comment (Contributor):

This is my only pain point left:
Could we make it so that shared state doesn't need to know about this flag? Currently it's used in
four different places, which might cause issues down the road. How about delegating these things to the
pRNG impl

  • writing the values (here)
  • setting gid/idx (ll 358)
  • allocating (ll 396)/ filling (ll 477) the indices

via the use of an appropriate interface. In said interface we can make the decision once.
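
For illustration only, here is a minimal sketch of the kind of interface this comment asks for; the names (prng_interface, make_prng) and the local type aliases are invented and do not reflect how the thread was eventually resolved. The point is that the host-versus-device decision is made once, when the implementation is constructed, and shared_state only ever talks to the interface.

#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Local aliases so the sketch stands alone; in Arbor these come from the headers above.
using value_type   = double;        // arb_value_type
using size_type    = std::uint32_t; // arb_size_type
using counter_type = std::uint64_t; // cbprng::value_type

// One object per backend; shared_state would hold a pointer to it and never
// test ARB_ARBOR_NO_GPU_RAND itself.
struct prng_interface {
    virtual ~prng_interface() = default;

    // allocate per-mechanism index storage and record gid/idx (ll 358, 396, 477)
    virtual void instantiate(std::size_t width, std::size_t width_padded,
                             const std::vector<size_type>& gid,
                             const std::vector<size_type>& idx) = 0;

    // refill one cache's worth of values for this mechanism (the #ifdef above)
    virtual void update(value_type* dst, std::size_t num_rv,
                        counter_type seed, counter_type mech_id,
                        counter_type counter) = 0;
};

// The decision between host-side and device-side generation is made exactly once.
std::unique_ptr<prng_interface> make_prng(bool device_rng_available);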

// generate random numbers on the host
arb_value_type* dst = store.random_numbers_h_.data();
arb::multicore::generate_random_numbers(dst, width, width_padded, num_rv, cbprng_seed,
mech_id, counter, store.gid_.data(), store.idx_.data());
// transfer values to device
memory::gpu_memcpy_h2d(store.random_numbers_[0][0], dst,
(num_rv*cbprng::cache_size()*width_padded)*sizeof(arb_value_type));
#else
// generate random numbers on the device
generate_random_numbers(store.random_numbers_[0][0], width, width_padded, num_rv, cbprng_seed,
mech_id, counter, store.prng_indices_[0], store.prng_indices_[1]);
#endif
}
}

const arb_value_type* shared_state::mechanism_state_data(const mechanism& m, const std::string& key) {
const auto& store = storage.at(m.mechanism_id());

@@ -305,6 +353,14 @@ void shared_state::instantiate(mechanism& m, unsigned id, const mechanism_overri
if (storage.find(id) != storage.end()) throw arb::arbor_internal_error("Duplicate mech id in shared state");
auto& store = storage[id];

store.value_width_padded = width_padded;

#ifdef ARB_ARBOR_NO_GPU_RAND
// store indices for random number generation
store.gid_ = pos_data.gid;
store.idx_ = pos_data.idx;
#endif

// Allocate view pointers
store.state_vars_ = std::vector<arb_value_type*>(m.mech_.n_state_vars);
store.parameters_ = std::vector<arb_value_type*>(m.mech_.n_parameters);
@@ -333,20 +389,36 @@

// Allocate and initialize state and parameter vectors with default values.
{
// Allocate view pointers for random numbers
std::size_t num_random_numbers_per_cv = m.mech_.n_random_variables;
std::size_t random_number_storage = num_random_numbers_per_cv*cbprng::cache_size();
for (auto& v : store.random_numbers_) v.resize(num_random_numbers_per_cv);
#ifdef ARB_ARBOR_NO_GPU_RAND
store.random_numbers_h_.resize(random_number_storage*width_padded, 0);
#endif

// Allocate bulk storage
std::size_t count = (m.mech_.n_state_vars + m.mech_.n_parameters + 1)*width_padded + m.mech_.n_globals;
std::size_t count = (m.mech_.n_state_vars + m.mech_.n_parameters + 1 +
random_number_storage)*width_padded + m.mech_.n_globals;
store.data_ = array(count, NAN);
chunk_writer writer(store.data_.data(), width);
chunk_writer writer(store.data_.data(), width_padded);

// First sub-array of data_ is used for weight_
m.ppack_.weight = writer.append(pos_data.weight);
m.ppack_.weight = writer.append_with_padding(pos_data.weight, 0);
// Set fields
for (auto idx: make_span(m.mech_.n_parameters)) {
store.parameters_[idx] = writer.fill(m.mech_.parameters[idx].default_value);
}
for (auto idx: make_span(m.mech_.n_state_vars)) {
store.state_vars_[idx] = writer.fill(m.mech_.state_vars[idx].default_value);
}
// Set random numbers
for (auto idx_v: make_span(num_random_numbers_per_cv)) {
for (auto idx_c: make_span(cbprng::cache_size())) {
store.random_numbers_[idx_c][idx_v] = writer.fill(0);
}
}

// Assign global scalar parameters. NB: Last chunk, since it breaks the width striding.
for (auto idx: make_span(m.mech_.n_globals)) store.globals_[idx] = m.mech_.globals[idx].default_value;
for (auto& [k, v]: overrides.globals) {
@@ -368,10 +440,10 @@ void shared_state::instantiate(mechanism& m, unsigned id, const mechanism_overri
// Allocate bulk storage
std::size_t count = mult_in_place + peer_indices + m.mech_.n_ions + 1;
store.indices_ = iarray(count*width_padded);
chunk_writer writer(store.indices_.data(), width);
chunk_writer writer(store.indices_.data(), width_padded);

// Setup node indices
m.ppack_.node_index = writer.append(pos_data.cv);
m.ppack_.node_index = writer.append_with_padding(pos_data.cv, 0);
// Create ion indices
for (auto idx: make_span(m.mech_.n_ions)) {
auto ion = m.mech_.ions[idx].name;
@@ -383,16 +455,33 @@ void shared_state::instantiate(mechanism& m, unsigned id, const mechanism_overri
auto ni = memory::on_host(oion->node_index_);
auto indices = util::index_into(pos_data.cv, ni);
std::vector<arb_index_type> mech_ion_index(indices.begin(), indices.end());
store.ion_states_[idx].index = writer.append(mech_ion_index);
store.ion_states_[idx].index = writer.append_with_padding(mech_ion_index, 0);
}

m.ppack_.multiplicity = mult_in_place? writer.append(pos_data.multiplicity): nullptr;
m.ppack_.multiplicity = mult_in_place? writer.append_with_padding(pos_data.multiplicity, 0): nullptr;
// `peer_index` holds the peer CV of each CV in node_index.
// Peer CVs are only filled for gap junction mechanisms. They are used
// to index the voltage at the other side of a gap-junction connection.
m.ppack_.peer_index = peer_indices? writer.append(pos_data.peer_cv): nullptr;
m.ppack_.peer_index = peer_indices? writer.append_with_padding(pos_data.peer_cv, 0): nullptr;
}

#ifndef ARB_ARBOR_NO_GPU_RAND
// Allocate and initialize index vectors for prng
{
// Allocate bulk storage
std::size_t count = 2;
store.sindices_ = sarray(count*width_padded);
chunk_writer writer(store.sindices_.data(), width_padded);

store.prng_indices_.resize(2);

store.prng_indices_[0] = writer.append_with_padding(pos_data.gid, 0);
store.prng_indices_[1] = writer.append_with_padding(pos_data.idx, 0);
}

store.prng_indices_d_ = memory::on_gpu(store.prng_indices_);
#endif

// Shift data to GPU, set up pointers
store.parameters_d_ = memory::on_gpu(store.parameters_);
m.ppack_.parameters = store.parameters_d_.data();
Expand All @@ -402,6 +491,9 @@ void shared_state::instantiate(mechanism& m, unsigned id, const mechanism_overri

store.ion_states_d_ = memory::on_gpu(store.ion_states_);
m.ppack_.ion_states = store.ion_states_d_.data();

for (auto idx_c: make_span(cbprng::cache_size()))
store.random_numbers_d_[idx_c] = memory::on_gpu(store.random_numbers_[idx_c]);
}

void shared_state::integrate_voltage() {
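
A note on the caching policy in update_prng_state above: the per-mechanism counter advances on every call, only every cache_size()-th call triggers a fresh batch of draws, and in between the mechanism's ppack_ is pointed at one of the previously generated cache slots. Below is a standalone sketch of just that bookkeeping, with the actual generation replaced by a print statement and all names invented for illustration.

#include <array>
#include <cstdint>
#include <cstdio>

constexpr unsigned cache_size = 4; // assumes one draw yields four cached values

struct mech_rng_cache {
    std::uint64_t counter = 0;                       // incremented on every update
    std::array<const double*, cache_size> slots{};   // pointers to the cached value blocks

    // Returns the block the mechanism should read during this time step.
    const double* update(std::uint64_t seed, std::uint64_t mech_id) {
        const std::uint64_t c = counter++;
        const unsigned cache_idx = static_cast<unsigned>(c % cache_size);
        if (cache_idx == 0) {
            // one keyed draw per (seed, mech_id, c) refills all cache_size slots at once
            std::printf("regenerating cache: seed=%llu mech=%llu counter=%llu\n",
                        static_cast<unsigned long long>(seed),
                        static_cast<unsigned long long>(mech_id),
                        static_cast<unsigned long long>(c));
        }
        return slots[cache_idx];
    }
};
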
28 changes: 27 additions & 1 deletion arbor/backends/gpu/shared_state.hpp
@@ -10,6 +10,7 @@

#include "fvm_layout.hpp"

#include "backends/rand_fwd.hpp"
#include "backends/gpu/gpu_store_types.hpp"
#include "backends/gpu/stimulus.hpp"
#include "backends/gpu/diffusion_state.hpp"
@@ -116,13 +117,33 @@ struct ARB_ARBOR_API shared_state {
struct mech_storage {
array data_;
iarray indices_;

// rounded up array size (multiple of preferred GPU alignment)
std::size_t value_width_padded;

std::vector<arb_value_type> globals_;
std::vector<arb_value_type*> parameters_;
std::vector<arb_value_type*> state_vars_;
std::vector<arb_ion_state> ion_states_;
memory::device_vector<arb_value_type*> parameters_d_;
memory::device_vector<arb_value_type*> state_vars_d_;
memory::device_vector<arb_ion_state> ion_states_d_;

// random number device storage
std::array<std::vector<arb_value_type*>, cbprng::cache_size()> random_numbers_;
std::array<memory::device_vector<arb_value_type*>, cbprng::cache_size()> random_numbers_d_;

// auxiliary random number host storage (if GPU based generation is disabled)
std::vector<arb_value_type> random_numbers_h_;
std::vector<arb_size_type> gid_;
std::vector<arb_size_type> idx_;
// auxiliary random number device storage (if GPU based generation is enabled)
sarray sindices_;
std::vector<arb_size_type*> prng_indices_;
memory::device_vector<arb_size_type*> prng_indices_d_;

// time step counter
cbprng::counter_type random_number_update_counter_ = 0u;
};

using cable_solver = arb::gpu::matrix_state_fine<arb_value_type, arb_index_type>;
@@ -151,6 +172,8 @@ struct ARB_ARBOR_API shared_state {
array time_since_spike; // Stores time since last spike on any detector, organized by cell.
iarray src_to_spike; // Maps spike source index to spike index

arb_seed_type cbprng_seed; // random number generator seed

istim_state stim_data;
std::unordered_map<std::string, ion_state> ion_data;
deliverable_event_stream deliverable_events;
@@ -168,14 +191,17 @@
const std::vector<arb_value_type>& temperature_K,
const std::vector<arb_value_type>& diam,
const std::vector<arb_index_type>& src_to_spike,
unsigned // align parameter ignored
unsigned, // align parameter ignored
arb_seed_type cbprng_seed_ = 0u
);

// Setup a mechanism and tie its backing store to this object
void instantiate(arb::mechanism&, unsigned, const mechanism_overrides&, const mechanism_layout&);

void set_parameter(mechanism&, const std::string&, const std::vector<arb_value_type>&);

void update_prng_state(mechanism&);

// Note: returned pointer points to device memory.
const arb_value_type* mechanism_state_data(const mechanism& m, const std::string& key);

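The value_width_padded member above is the mechanism's array width rounded up so that each width_padded-strided sub-array begins on the preferred alignment boundary. A small illustration of that rounding, assuming padding to a multiple of the alignment expressed in elements (the helper name and the numbers are ours, not Arbor's):

#include <cstddef>

// round n up to the next multiple of the alignment a (a > 0)
constexpr std::size_t align_up(std::size_t n, std::size_t a) {
    return a*((n + a - 1)/a);
}
static_assert(align_up(13, 8) == 16, "13 sites padded to a width of 16 with 8-element alignment");
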
1 change: 0 additions & 1 deletion arbor/backends/gpu/threshold_watcher.hpp
@@ -110,7 +110,6 @@ class threshold_watcher {
/// crossed since current time t, and the last time the test was
/// performed.
void test(array* time_since_spike) {
arb_assert(values_);

if (size()>0) {
test_thresholds_impl(