
adding raja derived field #1161

Merged Dec 6, 2023 (40 commits)
Changes from 11 commits

Commits:
7a74672
things making sense. now for raja and avoiding graph building
nicolemarsaglia Jun 22, 2023
24e8e42
brains in circles, but things still fine
nicolemarsaglia Jun 22, 2023
51368da
change output node in functor
nicolemarsaglia Jun 22, 2023
ce6052e
blueprint/conduit q's. preserve domain structure?
nicolemarsaglia Jun 22, 2023
2b31b68
lots of tweaks. making this a function, not an object, hope that's co…
nicolemarsaglia Jun 23, 2023
681fd81
things built
nicolemarsaglia Jun 23, 2023
21d9ba1
figure out undefined symbol error
nicolemarsaglia Jun 23, 2023
44f14f0
push current version
nicolemarsaglia Jun 29, 2023
f954f78
change dataset in place
nicolemarsaglia Jun 29, 2023
9a9dcde
move addfields from expressions to filters
nicolemarsaglia Jun 30, 2023
89a6e9a
device values?
nicolemarsaglia Jun 30, 2023
b3bf81c
close but wha ha happen to the middle field?
nicolemarsaglia Jun 30, 2023
7d85bf5
some refactoring and added a simple test
cyrush Jul 3, 2023
0e3b6eb
cleanup
nicolemarsaglia Jul 3, 2023
da92297
add missing guard for add fields test
cyrush Jul 3, 2023
fe02c7d
finish merge (post recent develop exprs renaming)
cyrush Jul 14, 2023
c2ee648
wip: identify expanded case to zero copy to vtk-m
cyrush Jul 26, 2023
e947135
use strided handle
cyrush Jul 27, 2023
28c22e5
Merge branch 'develop' into task/2022_6_raja_derived_field
nicolemarsaglia Jul 27, 2023
a1bf125
Merge branch 'task/2023_07_expand_vtkm_strided_zero_copy' into task/2…
nicolemarsaglia Jul 27, 2023
e7ae4cd
pull in new vtkm zero copy
nicolemarsaglia Jul 27, 2023
46f74f9
remove merge leftovers
nicolemarsaglia Jul 27, 2023
716ef54
add ints
nicolemarsaglia Jul 28, 2023
ef5449c
start of change explicit coord to use vtkm array handle stride
nicolemarsaglia Jul 28, 2023
07f13ad
first swipe at a coords, now to test.
nicolemarsaglia Jul 31, 2023
05cea95
ascent_vtkh_data_adapter.cpp
nicolemarsaglia Jul 31, 2023
1a48615
back to working and clean
nicolemarsaglia Aug 1, 2023
73ffef9
2d logic
nicolemarsaglia Aug 1, 2023
fba3451
this seems more right
nicolemarsaglia Aug 1, 2023
53c2208
let's finish our if statement kthxbye -- fixes nyx
nicolemarsaglia Aug 8, 2023
0581ee7
these need to go back to original
nicolemarsaglia Aug 10, 2023
24ac645
Update ascent_data_object.cpp
nicolemarsaglia Aug 10, 2023
f123f6a
complete the merge from develop
cyrush Nov 7, 2023
b7e6bd0
port to new interfaces
cyrush Nov 8, 2023
9222dea
adaptor logic update
cyrush Nov 8, 2023
d93b340
use proper node as mcarray input
cyrush Nov 8, 2023
d4b3aa2
add some more debugging output
cyrush Nov 10, 2023
1e977d5
Merge branch 'develop' into task/2023_6_raja_derived_field
cyrush Dec 6, 2023
6cd6d7a
fix for fields vs non vtk-m supported type
cyrush Dec 6, 2023
728a977
fix with one of the zstride coords calcs, simplify ascent render poly…
cyrush Dec 6, 2023
4 changes: 2 additions & 2 deletions src/libs/ascent/runtimes/ascent_expression_eval.cpp
@@ -876,15 +876,15 @@ initialize_functions()
field_sig["args/component/optional"];
field_sig["args/component/description"] =
"Used to specify a single component if the field is a vector field.";
field_sig["description"] = "Return a mesh field given a its name.";
field_sig["description"] = "Return a mesh field given its name.";

//---------------------------------------------------------------------------

conduit::Node &topo_sig = (*functions)["topo"].append();
topo_sig["return_type"] = "topo";
topo_sig["filter_name"] = "topo";
topo_sig["args/arg1/type"] = "string";
topo_sig["description"] = "Return a mesh topology given a its name.";
topo_sig["description"] = "Return a mesh topology given its name.";

//---------------------------------------------------------------------------

@@ -992,6 +992,87 @@ field_histogram(const conduit::Node &dataset,
return res;
}

// Take in an array of fields and
// add a new field that is field1 + ... + fieldn
void
derived_field_add(conduit::Node &dataset,
const std::vector<std::string> &fields,
const std::string &out_field)
{
const int num_fields = fields.size();
const std::string output_path = "fields/" + out_field;
for(int i = 0; i < dataset.number_of_children(); ++i)
{
conduit::Node &dom = dataset.child(i);
for(int field = 0; field < num_fields; field++)
{
const std::string path = "fields/" + fields[field];
if(dom.has_path(path)) //has both
{
if(!dom.has_path(output_path))
{
std::cerr << "DOMAIN does not have OUTPUT PATH" << std::endl;
std::cerr << "setting output with initial info: " << std::endl;
dom[output_path]["association"] = dom[path]["association"];
dom[output_path]["topology"] = dom[path]["topology"];
if(field_is_float32(dom[path]))
{
const int vals = dom[path]["values"].dtype().number_of_elements();
std::cerr << "num vals: " << vals << std::endl;
std::vector<conduit::float32> zeroes(vals,0.0);
std::cerr << "zeroes size: " << zeroes.size() << std::endl;
dom[output_path]["values"].set(zeroes);
}
else
{
const int vals = dom[path]["values"].dtype().number_of_elements();
std::cerr << "num vals: " << vals << std::endl;
std::vector<conduit::float64> zeroes(vals,0.0);
dom[output_path]["values"].set(zeroes);
std::cerr << "zeroes size: " << zeroes.size() << std::endl;
std::cerr << "output path initialized to zeroes: " << std::endl;
dom[output_path]["values"].print();
}
std::string out_assoc = dom[output_path]["association"].to_string();
std::string out_topo = dom[output_path]["topology"].to_string();
std::cerr << "set output topo as: " << out_topo << std::endl;
std::cerr << "set output assoc as: " << out_assoc << std::endl;
std::cerr << "number of elements: " << dom[path]["values"].dtype().number_of_elements() << std::endl;
}
else
{
std::cerr << "DOMAIN HAS OUTPUT PATH" << std::endl;
std::string out_assoc = dom[output_path]["association"].to_string();
std::string out_topo = dom[output_path]["topology"].to_string();
std::string f_assoc = dom[path]["association"].to_string();
std::string f_topo = dom[path]["topology"].to_string();
if(out_assoc != f_assoc)
{
ASCENT_ERROR("Field associations do not match:\n " <<
"Field " << fields[field] << " has association " << f_assoc << "\n" <<
"Field " << out_field << " has association " << out_assoc << "\n");
}
if(out_topo != f_topo)
{
ASCENT_ERROR("Field topologies do not match:\n " <<
"Field " << fields[field] << " has topology " << f_topo << "\n" <<
"Field " << out_field << " has topology " << out_topo << "\n");
}
}
std::cerr << "dom before add_reduction" << std::endl;
dom.print();
dom[output_path]["values"] = derived_field_add_reduction(dom[output_path], dom[path])["values"];
[Review comment]
Author: Data is coming out here with device_values.
std::cerr << "dom after add_reduction" << std::endl;
dom.print();
}
else //does not have field
continue;
}
}

return;
}

// returns a Node containing the min, max and dim for x,y,z given a topology
conduit::Node
global_bounds(const conduit::Node &dataset, const std::string &topo_name)
@@ -91,6 +91,10 @@ ASCENT_API
conduit::Node global_bounds(const conduit::Node &dataset,
const std::string &topo_name);

ASCENT_API
void derived_field_add(conduit::Node &dataset,
const std::vector<std::string> &fields,
const std::string &output_field);
//
// NOTE: ascent_data_binning contains a RAJA version
// of binning that needs more work, but should eventually
189 changes: 189 additions & 0 deletions src/libs/ascent/runtimes/expressions/ascent_conduit_reductions.cpp
@@ -153,6 +153,77 @@ conduit::Node dispatch_memory(const conduit::Node &field,
return res;
}

//dispatch memory for a derived field (DF)
template<typename Function, typename Exec>
conduit::Node dispatch_memory_DF(const conduit::Node &l_field,
const conduit::Node &r_field,
std::string component,
const Function &func,
const Exec &exec)
{
const std::string mem_space = Exec::memory_space;

conduit::Node res;
if(field_is_float32(l_field))
{
if(!field_is_float32(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::float32> l_farray(l_field);
MemoryInterface<conduit::float32> r_farray(r_field);
MemoryAccessor<conduit::float32> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::float32> r_accessor = r_farray.accessor(mem_space,component);
res = func(l_accessor, r_accessor, exec);
}
else if(field_is_float64(l_field))
{
if(!field_is_float64(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::float64> l_farray(l_field);
MemoryInterface<conduit::float64> r_farray(r_field);
MemoryAccessor<conduit::float64> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::float64> r_accessor = r_farray.accessor(mem_space,component);
res = func(l_accessor, r_accessor, exec);
}
else if(field_is_int32(l_field))
{
if(!field_is_int32(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::int32> l_farray(l_field);
MemoryInterface<conduit::int32> r_farray(r_field);
MemoryAccessor<conduit::int32> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::int32> r_accessor = r_farray.accessor(mem_space,component);
res = func(l_accessor, r_accessor, exec);
}
else if(field_is_int64(l_field))
{
if(!field_is_int64(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::int64> l_farray(l_field);
MemoryInterface<conduit::int64> r_farray(r_field);
MemoryAccessor<conduit::int64> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::int64> r_accessor = r_farray.accessor(mem_space,component);
res = func(l_accessor, r_accessor, exec);
}
else
{
ASCENT_ERROR("Type dispatch: unsupported array type "<<
l_field.schema().to_string());
}
return res;
}

template<typename Function>
conduit::Node
exec_dispatch(const conduit::Node &field, std::string component, const Function &func)
@@ -195,6 +266,48 @@ exec_dispatch(const conduit::Node &field, std::string component, const Function
return res;
}

template<typename Function>
conduit::Node
exec_dispatch_DF(const conduit::Node &l_field, const conduit::Node &r_field, std::string component, const Function &func)
{

conduit::Node res;
const std::string exec_policy = ExecutionManager::execution_policy();
//std::cout<<"Exec policy "<<exec_policy<<"\n";
if(exec_policy == "serial")
{
SerialExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#if defined(ASCENT_OPENMP_ENABLED) && defined(ASCENT_RAJA_ENABLED)
else if(exec_policy == "openmp")
{
OpenMPExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#endif
#if defined(ASCENT_CUDA_ENABLED)
else if(exec_policy == "cuda")
{
CudaExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#endif
#if defined(ASCENT_HIP_ENABLED)
else if(exec_policy == "hip")
{
HipExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#endif
else
{
ASCENT_ERROR("Execution dispatch: unsupported execution policy "<<
exec_policy);
}
return res;
}

template<typename Function>
conduit::Node
field_dispatch(const conduit::Node &field, const Function &func)
@@ -481,6 +594,75 @@ struct SumFunctor
}
};

struct DFAddFunctor
{
template<typename T, typename Exec>
conduit::Node operator()(const MemoryAccessor<T> l_accessor,
const MemoryAccessor<T> r_accessor,
const Exec &) const
{
const int l_size = l_accessor.m_size;
const int r_size = r_accessor.m_size;
bool diff_sizes = false;
int size;
int max_size;

size = max_size = l_size;
if(l_size != r_size)
{
size = min(l_size, r_size);
max_size = max(l_size, r_size);
diff_sizes = true;
}


// conduit zero initializes this array
conduit::Node res;
res["values"].set(conduit::DataType::float64(max_size));
double *res_array = res["values"].value();

Array<double> field_sums(res_array, max_size);

double *sums_ptr = field_sums.get_ptr(Exec::memory_space);

using for_policy = typename Exec::for_policy;

ascent::forall<for_policy>(0, size, [=] ASCENT_LAMBDA(index_t i)
{
const T val = l_accessor[i] + r_accessor[i];
sums_ptr[i] = val;
});
ASCENT_DEVICE_ERROR_CHECK();

if(diff_sizes)
{
if(l_size > r_size)
{
ascent::forall<for_policy>(size, l_size, [=] ASCENT_LAMBDA(index_t i)
{
const T val = l_accessor[i];
sums_ptr[i] = val;
});
ASCENT_DEVICE_ERROR_CHECK();
[Review thread]
Member: To confirm my understanding: if one field is larger than the other, the output is sized to the larger field and the remaining values are simply copied?
Author: Correct. That's what I was going for: the output is zero-initialized, so whatever is extra is simply copied over. Is it OK to accept fields of different sizes, or do I need to be concerned about topology further down the pipeline?
Member: Thanks, yes, this makes sense. We aren't likely to hit these cases often, because Blueprint fields on the same topology with the same association should share cardinality, but it's much better to handle the logic than to have a real head-scratcher crash down the line.
}
else
{
ascent::forall<for_policy>(size, r_size, [=] ASCENT_LAMBDA(index_t i)
{
const T val = r_accessor[i];
sums_ptr[i] = val;
});
ASCENT_DEVICE_ERROR_CHECK();
}
}

// synch the values back to the host
(void) field_sums.get_host_ptr();

[Review thread]
Author: @cyrush My domain goes into this functor with "values" and comes out with a "device_values" added onto it, e.g.:
  rho_electrons21:
    association: "element"
    topology: "topo"
    values: [0.0, 0.0, 0.0, ..., 0.0, 0.0]
    device_values: [0.0, 0.0, 0.0, ..., 0.0, 0.0]
Have you seen this before? Am I messing something up here? I was able to add a field to nothing and got correct results (the final image was correct; that output also had device_values). But when adding two fields together, it is as if only the second field is considered. Not sure if it's pushing the input to "device_values" and then putting the fresh batch of values in "values"?
Author (follow-up): Hmm, never mind. It looks like device_values gets generated for the device calculations, so it's most likely unrelated to why one field is overwriting the other.

return res;
}
};

struct NanFunctor
{
template<typename T, typename Exec>
@@ -742,6 +924,13 @@ array_sum(const conduit::Node &array,

return res;
}

conduit::Node
derived_field_add_reduction(const conduit::Node &l_field, const conduit::Node &r_field, const std::string &component)
{
return detail::exec_dispatch_DF(l_field, r_field, component, detail::DFAddFunctor());
}

//-----------------------------------------------------------------------------
};
//-----------------------------------------------------------------------------
@@ -74,6 +74,10 @@ conduit::Node ASCENT_API array_min(const conduit::Node &array,
conduit::Node ASCENT_API array_sum(const conduit::Node &array,
const std::string &exec_loc,
const std::string &component = "");

conduit::Node ASCENT_API derived_field_add_reduction(const conduit::Node &l_field,
const conduit::Node &r_field,
const std::string &component = "");
};
//-----------------------------------------------------------------------------
// -- end ascent::runtime::expressions--
@@ -1205,6 +1205,8 @@ FieldMax::execute()
(*output)["attrs/element/index"] = n_max["index"];
(*output)["attrs/element/assoc"] = n_max["assoc"];

std::cerr << "FieldMax output: " << std::endl;
output->print();
set_output<conduit::Node>(output);
}

@@ -3554,6 +3556,7 @@ BinByValue::execute()
set_output<conduit::Node>(output);
}


//-----------------------------------------------------------------------------
FieldSum::FieldSum() : Filter()
{