Support for building sub-models using pc.nrncore_write(path, append) (#964)

* pc.nrncore_write(datpath, append)
  Alternative to the old pc.nrnbbcore_write(datpath, gidgroups_vector).
  The user does not need to construct, or know the format of, files.dat when
  sequentially constructing and writing model files. When append is 0, files.dat
  is written from scratch. When append is non-zero, the existing files.dat is
  read, its ngroupids line is modified, and the new groupids are appended to the
  end of the file. This also works when submodels are constructed by different launches.
* Update ParallelContext.nrncore_write documentation.
* ParallelContext.nrnbbcore_write(path, Vector) is deprecated.
  The optional second arg of ParallelContext.nrncore_write(path, append)
  must be a number.
nrnhines authored and alexsavulescu committed Apr 13, 2021
1 parent 762cfc8 commit 7f1be7e
Showing 5 changed files with 150 additions and 57 deletions.
32 changes: 22 additions & 10 deletions docs/python/modelspec/programmatic/network/parcon.rst
@@ -3289,10 +3289,10 @@ Parallel Transfer
----

.. method:: ParallelContext.nrnbbcore_write
.. method:: ParallelContext.nrncore_write

Syntax:
``pc.nrnbbcore_write([path[, gidgroup_vec]])``
``pc.nrncore_write([path[, append_files_dat]])``

Description:
Writes files describing the existing model in such a way that those
@@ -3320,19 +3320,31 @@ Parallel Transfer
<gidgroup>_2.dat contains all the data needed to actually construct
the cells and synapses and specify connection weights and delays.

If the second argument does not exist,
rank 0 writes a "files.dat" file with a first value that
specifies the total number of gidgroups and one gidgroup value per
If the second argument does not exist or has a value of False (or 0),
rank 0 writes a "files.dat" file with a version string, a -1
indicator if there are gap junctions, and an integer value that
specifies the total number of gidgroups, followed by one gidgroup value per
line for all threads of all ranks.
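
For concreteness, a files.dat written from scratch for a hypothetical model
with gap junctions and four gidgroups could look like the following (the
version string and groupid values are illustrative only; note the dataset
count padded to a 10-character field):

```
1.4
-1
         4
0
1
2
3
```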

If the model is too large to exist in NEURON (models typically use
an order of magnitude less memory in CoreNEURON) the model can
be constructed in NEURON as a series of submodels.
When one piece is constructed
on each rank, this function can be called with a second argument which
must be a Vector. In this case, rank 0 will NOT write a files.dat
and instead the pc.nthread() gidgroup values for the rank will be
returned in the Vector.
When one submodel is constructed
on each rank, this function can be called with a second argument
with a value of True (or nonzero), which signals that the existing
files.dat file should have its n_gidgroups line updated
and the pc.nthread() gidgroup values for each rank appended
to the files.dat file. Note that one can either create
submodels sequentially within a single launch, which requires
a "teardown" function to destroy the model in preparation for building
the next submodel, or create the submodels sequentially as a series
of separate launches. A user-written "teardown" function should,
in order, free all gids with :func:`gid_clear`, arrange for all
NetCons to be freed, and arrange for all Sections to be destroyed.
The latter two are straightforward if the submodel is created as
an instance of a class. An example of the sequential build, nrncore_write,
teardown cycle is test_submodel.py in
http://github.com/neuronsimulator/ringtest.
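
The sequential build/teardown pattern above can be sketched in Python. Since
running it for real requires a NEURON installation, a stub records the calls a
real ``h.ParallelContext`` would receive; ``StubParallelContext``,
``Submodel``, and ``build_and_write`` are illustrative names, not NEURON API:

```python
class StubParallelContext:
    """Records the calls a real ParallelContext would receive."""
    def __init__(self):
        self.calls = []

    def gid_clear(self):
        self.calls.append("gid_clear")

    def nrncore_write(self, path, append):
        self.calls.append(("nrncore_write", path, int(append)))

class Submodel:
    """Holding cells/NetCons in one object makes teardown a simple del."""
    def __init__(self, index):
        self.index = index  # a real model would build Sections/NetCons here

def build_and_write(pc, path, n_submodels):
    for i in range(n_submodels):
        model = Submodel(i)                     # build one piece of the model
        pc.nrncore_write(path, append=(i > 0))  # first call: fresh files.dat
        pc.gid_clear()                          # teardown step 1: free gids
        del model                               # steps 2-3: NetCons/Sections go away

pc = StubParallelContext()
build_and_write(pc, "coredat", 3)
```

Only the first call writes files.dat from scratch; every later call appends.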

This function requires cvode.cache_efficient(1). Multisplit is not
supported. The model cannot be more complicated than a spike or gap
17 changes: 13 additions & 4 deletions src/nrniv/nrncore_write.cpp
@@ -168,7 +168,7 @@ size_t write_corenrn_model(const std::string& path) {
}

// accessible from ParallelContext.total_bytes()
size_t nrnbbcore_write() {
size_t nrncore_write() {
const std::string& path = get_write_path();
return write_corenrn_model(path);
}
@@ -225,17 +225,26 @@ static void part2(const char* path) {
}

// filename data might have to be collected at hoc level since
// pc.nrnbbcore_write might be called
// pc.nrncore_write might be called
// many times per rank, as the model may be built as a series of submodels.
if (ifarg(2)) {
if (ifarg(2) && hoc_is_object_arg(2) && is_vector_arg(2)) {
// Legacy style. Interpreter collects groupgids and writes files.dat
Vect* cgidvec = vector_arg(2);
vector_resize(cgidvec, nrn_nthread);
double* px = vector_vec(cgidvec);
for (int i=0; i < nrn_nthread; ++i) {
px[i] = double(cgs[i].group_id);
}
}else{
write_nrnthread_task(path, cgs);
bool append = false;
if (ifarg(2)) {
if (hoc_is_double_arg(2)) {
append = (*getarg(2) != 0);
}else{
hoc_execerror("Second arg must be Vector or double.", NULL);
}
}
write_nrnthread_task(path, cgs, append);
}

part2_clean();
137 changes: 99 additions & 38 deletions src/nrniv/nrncore_write/io/nrncore_io.cpp
@@ -61,7 +61,7 @@ void write_memb_mech_types(const char* fname) {
if (nrnmpi_myid > 0) { return; } // only rank 0 writes this file
std::ofstream fs(fname);
if (!fs.good()) {
hoc_execerror("nrnbbcore_write write_mem_mech_types could not open for writing: %s\n", fname);
hoc_execerror("nrncore_write write_mem_mech_types could not open for writing: %s\n", fname);
}
write_memb_mech_types_direct(fs);
}
@@ -77,7 +77,7 @@ void write_globals(const char* fname) {

FILE* f = fopen(fname, "w");
if (!f) {
hoc_execerror("nrnbbcore_write write_globals could not open for writing: %s\n", fname);
hoc_execerror("nrncore_write write_globals could not open for writing: %s\n", fname);
}

fprintf(f, "%s\n", bbcore_write_version);
@@ -112,7 +112,7 @@ void write_nrnthread(const char* path, NrnThread& nt, CellGroup& cg) {
nrn_assert(snprintf(fname, 1000, "%s/%d_1.dat", path, cg.group_id) < 1000);
FILE* f = fopen(fname, "wb");
if (!f) {
hoc_execerror("nrnbbcore_write write_nrnthread could not open for writing:", fname);
hoc_execerror("nrncore_write write_nrnthread could not open for writing:", fname);
}
fprintf(f, "%s\n", bbcore_write_version);

@@ -129,7 +129,7 @@ void write_nrnthread(const char* path, NrnThread& nt, CellGroup& cg) {
nrn_assert(snprintf(fname, 1000, "%s/%d_2.dat", path, cg.group_id) < 1000);
f = fopen(fname, "w");
if (!f) {
hoc_execerror("nrnbbcore_write write_nrnthread could not open for writing:", fname);
hoc_execerror("nrncore_write write_nrnthread could not open for writing:", fname);
}

fprintf(f, "%s\n", bbcore_write_version);
@@ -308,26 +308,36 @@ void nrnbbcore_vecplay_write(FILE* f, NrnThread& nt) {



static void fgets_no_newline(char* s, int size, FILE* f) {
if (fgets(s, size, f) == NULL) {
fclose(f);
hoc_execerror("Error reading line in files.dat", strerror(errno));
}
int n = strlen(s);
if (n && s[n-1] == '\n') {
s[n-1] = '\0';
}
}

/** Write all dataset ids to files.dat.
*
* Format of the files.dat file is:
*
* version string
* -1 (if model uses gap junction)
* n (number of datasets)
* n (number of datasets) in format %10d
* id1
* id2
* ...
* idN
*/
void write_nrnthread_task(const char* path, CellGroup* cgs)
void write_nrnthread_task(const char* path, CellGroup* cgs, bool append)
{
// ids of datasets that will be created
std::vector<int> iSend;

// ignore empty nrnthread (has -1 id)
for (int iInt = 0; iInt < nrn_nthread; ++iInt)
{
for (int iInt = 0; iInt < nrn_nthread; ++iInt) {
if ( cgs[iInt].group_id >= 0) {
iSend.push_back(cgs[iInt].group_id);
}
@@ -336,8 +346,7 @@ void write_nrnthread_task(const char* path, CellGroup* cgs)
// receive and displacement buffers for mpi
std::vector<int> iRecv, iDispl;

if (nrnmpi_myid == 0)
{
if (nrnmpi_myid == 0) {
iRecv.resize(nrnmpi_numprocs);
iDispl.resize(nrnmpi_numprocs);
}
@@ -347,11 +356,11 @@ void write_nrnthread_task(const char* path, CellGroup* cgs)

#ifdef NRNMPI
// gather number of datasets from each task
if (nrnmpi_numprocs > 1) {
nrnmpi_int_gather(&num_datasets, begin_ptr(iRecv), 1, 0);
}else{
iRecv[0] = num_datasets;
}
if (nrnmpi_numprocs > 1) {
nrnmpi_int_gather(&num_datasets, begin_ptr(iRecv), 1, 0);
}else{
iRecv[0] = num_datasets;
}
#else
iRecv[0] = num_datasets;
#endif
@@ -360,8 +369,7 @@ void write_nrnthread_task(const char* path, CellGroup* cgs)
int iSumThread = 0;

// calculate mpi displacements
if (nrnmpi_myid == 0)
{
if (nrnmpi_myid == 0) {
for (int iInt = 0; iInt < nrnmpi_numprocs; ++iInt)
{
iDispl[iInt] = iSumThread;
@@ -374,53 +382,106 @@ void write_nrnthread_task(const char* path, CellGroup* cgs)

#ifdef NRNMPI
// gather ids into the array with correspondent offsets
if (nrnmpi_numprocs > 1) {
nrnmpi_int_gatherv(begin_ptr(iSend), num_datasets, begin_ptr(iRecvVec), begin_ptr(iRecv), begin_ptr(iDispl), 0);
}else{
for (int iInt = 0; iInt < num_datasets; ++iInt)
{
iRecvVec[iInt] = iSend[iInt];
if (nrnmpi_numprocs > 1) {
nrnmpi_int_gatherv(begin_ptr(iSend), num_datasets, begin_ptr(iRecvVec), begin_ptr(iRecv), begin_ptr(iDispl), 0);
}else{
for (int iInt = 0; iInt < num_datasets; ++iInt) {
iRecvVec[iInt] = iSend[iInt];
}
}
}
#else
for (int iInt = 0; iInt < num_datasets; ++iInt)
{
for (int iInt = 0; iInt < num_datasets; ++iInt) {
iRecvVec[iInt] = iSend[iInt];
}
#endif

/// Writing the file with task, correspondent number of threads and list of correspondent first gids
if (nrnmpi_myid == 0)
{
if (nrnmpi_myid == 0) {
// If append is false, begin a new files.dat (overwrite old if exists).
// If append is true, append groupids to existing files.dat.
// Note: The number of groupids (2nd or 3rd line) has to be
// overwritten with the total number so far. To avoid copying
// old to new, we allocate 10 chars for that number.

std::stringstream ss;
ss << path << "/files.dat";

std::string filename = ss.str();

FILE *fp = fopen(filename.c_str(), "w");
if (!fp) {
hoc_execerror("nrnbbcore_write write_nrnthread_task could not open for writing:", filename.c_str());
FILE *fp = NULL;
if (append == false) { // start a new file
fp = fopen(filename.c_str(), "w");
if (!fp) {
hoc_execerror("nrncore_write: could not open for writing:", filename.c_str());
}
} else { // modify groupid number and append to existing file
fp = fopen(filename.c_str(), "r+");
if (!fp) {
hoc_execerror("nrncore_write append: could not open for modifying:", filename.c_str());
}
}

constexpr int max_line_len = 20;
char line[max_line_len]; // All lines are actually no larger than %10d.

if (append) {
// verify same version
fgets_no_newline(line, max_line_len, fp);
// fgets_no_newline has already stripped the trailing newline
size_t n = strlen(bbcore_write_version);
if ((strlen(line) != n)
|| strncmp(line, bbcore_write_version, n) != 0) {
fclose(fp);
hoc_execerror("nrncore_write append: existing files.dat has inconsistent version:", line);
}
} else {
fprintf(fp, "%s\n", bbcore_write_version);
}

fprintf(fp, "%s\n", bbcore_write_version);

// notify coreneuron that this model involves gap junctions
if (nrnthread_v_transfer_) {
fprintf(fp, "-1\n");
if (append) {
fgets_no_newline(line, max_line_len, fp);
if (strcmp(line, "-1") != 0) {
fclose(fp);
hoc_execerror("nrncore_write append: existing files.dat does not have a gap junction indicator\n", NULL);
}
} else {
fprintf(fp, "-1\n");
}
}

// total number of datasets
fprintf(fp, "%d\n", iSumThread);
if (append) {
// this is the one that needs the space to get a new value
long pos = ftell(fp);
fgets_no_newline(line, max_line_len, fp);
int oldval = 0;
if (sscanf(line, "%d", &oldval) != 1) {
fclose(fp);
hoc_execerror("nrncore_write append: error reading number of groupids", NULL);
}
if (oldval == -1) {
fclose(fp);
hoc_execerror("nrncore_write append: existing files.dat has gap junction indicator where we expected a groupid count.", NULL);
}
iSumThread += oldval;
fseek(fp, pos, SEEK_SET);
}
fprintf(fp, "%10d\n", iSumThread);

if (append) {
// Start writing the groupids starting at the end of the file.
fseek(fp, 0, SEEK_END);
}

// write all dataset ids
for (int i = 0; i < iRecvVec.size(); ++i)
{
for (int i = 0; i < iRecvVec.size(); ++i) {
fprintf(fp, "%d\n", iRecvVec[i]);
}

fclose(fp);
}

}
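
The reason the dataset count is written with %10d becomes clear when
appending: rank 0 can patch the count in its fixed 10-character slot without
rewriting the rest of the file. A minimal Python sketch of the same
verify/patch/append sequence (the VERSION value and the write_files_dat
helper are illustrative, not NEURON code):

```python
import os

VERSION = "1.4"  # stands in for bbcore_write_version; illustrative value

def write_files_dat(path, groupids, append=False, gap_junctions=False):
    """Create files.dat, or append groupids while patching the count line."""
    fname = os.path.join(path, "files.dat")
    if not append:
        with open(fname, "w") as f:
            f.write(VERSION + "\n")
            if gap_junctions:
                f.write("-1\n")
            f.write("%10d\n" % len(groupids))  # count in a fixed-width slot
            for gid in groupids:
                f.write("%d\n" % gid)
        return
    with open(fname, "r+") as f:
        if f.readline().strip() != VERSION:
            raise RuntimeError("existing files.dat has inconsistent version")
        pos = f.tell()                 # remember where the next line starts
        line = f.readline().strip()
        if line == "-1":               # gap junction indicator present
            pos = f.tell()
            line = f.readline().strip()
        total = int(line) + len(groupids)
        f.seek(pos)                    # rewrite the count in its 10-char slot
        f.write("%10d" % total)
        f.seek(0, os.SEEK_END)         # new groupids go at the end of the file
        for gid in groupids:
            f.write("%d\n" % gid)
```

Because the replacement count occupies exactly the same 10 characters as the
old one, the trailing newline and all following lines are left untouched.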

/** @brief dump mapping information to gid_3.dat file */
2 changes: 1 addition & 1 deletion src/nrniv/nrncore_write/io/nrncore_io.h
@@ -39,7 +39,7 @@ typedef void (*bbcore_write_t)(double *, int *, int *, int *, double *, Datum *,

void write_contiguous_art_data(double **data, int nitem, int szitem, FILE *f);
double *contiguous_art_data(double **data, int nitem, int szitem);
void write_nrnthread_task(const char *, CellGroup *cgs);
void write_nrnthread_task(const char *, CellGroup *cgs, bool append);
void nrnbbcore_vecplay_write(FILE *f, NrnThread &nt);

void nrn_write_mapping_info(const char *path, int gid, NrnMappingInfo &minfo);
19 changes: 15 additions & 4 deletions src/parallel/ocbbs.cpp
@@ -69,7 +69,7 @@ extern "C" {
extern void nrn_thread_stat();
extern int nrn_allow_busywait(int);
extern int nrn_how_many_processors();
extern size_t nrnbbcore_write();
extern size_t nrncore_write();
extern size_t nrnbbcore_register_mapping();
extern int nrncore_run(const char*);
extern bool nrn_trajectory_request_per_time_step_;
@@ -977,8 +977,18 @@ static double thread_dt(void*) {
return nrn_threads[i]._dt;
}

static double nrnbbcorewrite(void*) {
return double(nrnbbcore_write());
static double nrncorewrite_argvec(void*) {
if (ifarg(2) && !(hoc_is_object_arg(2) && is_vector_arg(2))) {
hoc_execerror("nrnbbcore_write: optional second arg is not a Vector", NULL);
}
return double(nrncore_write());
}

static double nrncorewrite_argappend(void*) {
if (ifarg(2) && !hoc_is_double_arg(2)) {
hoc_execerror("nrncore_write: optional second arg is not a number (True or False append flag)", NULL);
}
return double(nrncore_write());
}

static double nrncorerun(void*) {
@@ -1079,7 +1089,8 @@ static Member_func members[] = {
"dt", thread_dt,
"t", nrn_thread_t,

"nrnbbcore_write", nrnbbcorewrite,
"nrnbbcore_write", nrncorewrite_argvec,
"nrncore_write", nrncorewrite_argappend,
"nrnbbcore_register_mapping", nrnbbcore_register_mapping,
"nrncore_run", nrncorerun,

