
adding raja derived field #1161

Merged — 40 commits, Dec 6, 2023

Commits
7a74672
things making sense. now for raja and avoiding graph building
nicolemarsaglia Jun 22, 2023
24e8e42
brains in circles, but things still fine
nicolemarsaglia Jun 22, 2023
51368da
change output node in functor
nicolemarsaglia Jun 22, 2023
ce6052e
blueprint/conduit q's. preserve domain structure?
nicolemarsaglia Jun 22, 2023
2b31b68
lots of tweaks. making this a function, not an object, hope that's co…
nicolemarsaglia Jun 23, 2023
681fd81
things built
nicolemarsaglia Jun 23, 2023
21d9ba1
figure out undefined symbol error
nicolemarsaglia Jun 23, 2023
44f14f0
push current version
nicolemarsaglia Jun 29, 2023
f954f78
change dataset in place
nicolemarsaglia Jun 29, 2023
9a9dcde
move addfields from expressions to filters
nicolemarsaglia Jun 30, 2023
89a6e9a
device values?
nicolemarsaglia Jun 30, 2023
b3bf81c
close but wha ha happen to the middle field?
nicolemarsaglia Jun 30, 2023
7d85bf5
some refactoring and added a simple test
cyrush Jul 3, 2023
0e3b6eb
cleanup
nicolemarsaglia Jul 3, 2023
da92297
add missing guard for add fields test
cyrush Jul 3, 2023
fe02c7d
finish merge (post recent develop exprs renaming)
cyrush Jul 14, 2023
c2ee648
wip: identify expanded case to zero copy to vtk-m
cyrush Jul 26, 2023
e947135
use strided handle
cyrush Jul 27, 2023
28c22e5
Merge branch 'develop' into task/2022_6_raja_derived_field
nicolemarsaglia Jul 27, 2023
a1bf125
Merge branch 'task/2023_07_expand_vtkm_strided_zero_copy' into task/2…
nicolemarsaglia Jul 27, 2023
e7ae4cd
pull in new vtkm zero copy
nicolemarsaglia Jul 27, 2023
46f74f9
remove merge leftovers
nicolemarsaglia Jul 27, 2023
716ef54
add ints
nicolemarsaglia Jul 28, 2023
ef5449c
start of change explicit coord to use vtkm array handle stride
nicolemarsaglia Jul 28, 2023
07f13ad
first swipe at a coords, now to test.
nicolemarsaglia Jul 31, 2023
05cea95
ascent_vtkh_data_adapter.cpp
nicolemarsaglia Jul 31, 2023
1a48615
back to working and clean
nicolemarsaglia Aug 1, 2023
73ffef9
2d logic
nicolemarsaglia Aug 1, 2023
fba3451
this seems more right
nicolemarsaglia Aug 1, 2023
53c2208
let's finish our if statement kthxbye -- fixes nyx
nicolemarsaglia Aug 8, 2023
0581ee7
these need to go back to original
nicolemarsaglia Aug 10, 2023
24ac645
Update ascent_data_object.cpp
nicolemarsaglia Aug 10, 2023
f123f6a
complete the merge from develop
cyrush Nov 7, 2023
b7e6bd0
port to new interfaces
cyrush Nov 8, 2023
9222dea
adaptor logic update
cyrush Nov 8, 2023
d93b340
use proper node as mcarray input
cyrush Nov 8, 2023
d4b3aa2
add some more debugging output
cyrush Nov 10, 2023
1e977d5
Merge branch 'develop' into task/2023_6_raja_derived_field
cyrush Dec 6, 2023
6cd6d7a
fix for fields vs non vtk-m supported type
cyrush Dec 6, 2023
728a977
fix with one of the zstride coords calcs, simplify ascent render poly…
cyrush Dec 6, 2023
@@ -992,6 +992,45 @@ field_histogram(const conduit::Node &dataset,
return res;
}

//returns a node that is field1 + field2
conduit::Node
derived_field_add(const conduit::Node &dataset,
const std::string &field1,
const std::string &field2,
const std::string &out_field)
{

conduit::Node res;
for(int i = 0; i < dataset.number_of_children(); ++i)
{
const std::string path1 = "fields/" + field1;
const std::string path2 = "fields/" + field2;
const conduit::Node &dom = dataset.child(i);
if(dom.has_path(path1) && dom.has_path(path2)) //has both
{
conduit::Node values;
values = derived_field_add_reduction(dom[path1], dom[path2]);
res[out_field].set(values["values"]); //need to preserve domain structure?
Contributor (Author) commented:
@cyrush blueprint question for getting my data back into my result node. What rules do I need to be following here? Do I need to maintain the domain structure for my output field? If not, am I overwriting my out field with each set? Should I instead loop over all domains, append the new field to a vector, and then set it at the end?

Member replied:
In this case, we can't pass a const Node if we want to modify it (add a new field).

I think it would be best to return just the resulting field Node and insert it into the conduit tree at a higher level (in a custom filter).

We will also need to check that the associations of the inputs (element or vertex) are the same and make sure they are both defined on the same topology. That info is in the field node alongside the values.

f1: 
  values : [..]
  topology : (string)
  association: (string)

The resulting field needs to propagate that info.
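The metadata checks described above can be sketched in plain C++ with a stand-in `Field` struct (this is a model of the blueprint field layout, not the actual conduit API; all names here are illustrative):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Stand-in for a blueprint field node: values plus the metadata
// that must travel with them into the result.
struct Field
{
    std::vector<double> values;
    std::string topology;
    std::string association; // "vertex" or "element"
};

// Add two fields, refusing mismatched topology/association and
// propagating the shared metadata. Assumes equal cardinality, which
// matching topology + association implies for blueprint fields.
Field field_add(const Field &l, const Field &r)
{
    if(l.topology != r.topology)
        throw std::runtime_error("fields on different topologies");
    if(l.association != r.association)
        throw std::runtime_error("fields with different associations");

    Field out;
    out.topology    = l.topology;    // propagate, per the review comment
    out.association = l.association;
    out.values.resize(l.values.size());
    for(std::size_t i = 0; i < l.values.size(); ++i)
        out.values[i] = l.values[i] + r.values.at(i);
    return out;
}
```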

Contributor (Author) replied:
Ah ok. This makes sense!

//save all into mega_values
//then set at end?

}
else if(dom.has_path(path1)) //only has path1
{
res[out_field].set(dom[path1].value());
}
else if(dom.has_path(path2)) //only has path2
{
res[out_field].set(dom[path2].value());
}
else //has neither field
continue; //?
}

return res;
}
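The per-domain branching above (both fields present, only one, or neither) can be modeled with standard containers as a stand-in for the conduit tree — a sketch of the control flow, not the real API:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

using FieldMap = std::map<std::string, std::vector<double>>;

// Mirror derived_field_add's branching for a single domain:
// add when both fields exist, pass through the one that does,
// and report false when neither is present (the "continue" case).
bool add_or_passthrough(const FieldMap &dom,
                        const std::string &f1,
                        const std::string &f2,
                        std::vector<double> &out)
{
    const bool has1 = dom.count(f1) > 0;
    const bool has2 = dom.count(f2) > 0;
    if(has1 && has2) // has both
    {
        const auto &a = dom.at(f1);
        const auto &b = dom.at(f2);
        out.resize(a.size());
        for(std::size_t i = 0; i < a.size(); ++i)
            out[i] = a[i] + b[i];
        return true;
    }
    if(has1) { out = dom.at(f1); return true; } // only has f1
    if(has2) { out = dom.at(f2); return true; } // only has f2
    return false; // has neither field
}
```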

// returns a Node containing the min, max and dim for x,y,z given a topology
conduit::Node
global_bounds(const conduit::Node &dataset, const std::string &topo_name)
@@ -91,6 +91,9 @@ ASCENT_API
conduit::Node global_bounds(const conduit::Node &dataset,
const std::string &topo_name);

ASCENT_API
conduit::Node derived_field_add(const conduit::Node &dataset,
                                const std::string &field1,
                                const std::string &field2,
                                const std::string &out_field);
//
// NOTE: ascent_data_binning contains a RAJA version
// of binning that needs more work, but should eventually
181 changes: 181 additions & 0 deletions src/libs/ascent/runtimes/expressions/ascent_conduit_reductions.cpp
@@ -153,6 +153,77 @@ conduit::Node dispatch_memory(const conduit::Node &field,
return res;
}

//dispatch memory for a derived field (DF)
template<typename Function, typename Exec>
conduit::Node dispatch_memory_DF(const conduit::Node &l_field,
const conduit::Node &r_field,
std::string component,
const Function &func,
const Exec &exec)
{
const std::string mem_space = Exec::memory_space;

conduit::Node res;
if(field_is_float32(l_field))
{
if(!field_is_float32(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::float32> l_farray(l_field);
MemoryInterface<conduit::float32> r_farray(r_field);
MemoryAccessor<conduit::float32> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::float32> r_accessor = r_farray.accessor(mem_space,component);
func(l_accessor, r_accessor, res, exec);
}
else if(field_is_float64(l_field))
{
if(!field_is_float64(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::float64> l_farray(l_field);
MemoryInterface<conduit::float64> r_farray(r_field);
MemoryAccessor<conduit::float64> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::float64> r_accessor = r_farray.accessor(mem_space,component);
func(l_accessor, r_accessor, res, exec);
}
else if(field_is_int32(l_field))
{
if(!field_is_int32(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::int32> l_farray(l_field);
MemoryInterface<conduit::int32> r_farray(r_field);
MemoryAccessor<conduit::int32> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::int32> r_accessor = r_farray.accessor(mem_space,component);
func(l_accessor, r_accessor, res, exec);
}
else if(field_is_int64(l_field))
{
if(!field_is_int64(r_field))
ASCENT_ERROR("Type dispatch: mismatch array types\n"<<
l_field.schema().to_string() <<
"\n vs. \n" <<
r_field.schema().to_string());
MemoryInterface<conduit::int64> l_farray(l_field);
MemoryInterface<conduit::int64> r_farray(r_field);
MemoryAccessor<conduit::int64> l_accessor = l_farray.accessor(mem_space,component);
MemoryAccessor<conduit::int64> r_accessor = r_farray.accessor(mem_space,component);
func(l_accessor, r_accessor, res, exec);
}
else
{
ASCENT_ERROR("Type dispatch: unsupported array type "<<
l_field.schema().to_string());
}
return res;
}

template<typename Function>
conduit::Node
exec_dispatch(const conduit::Node &field, std::string component, const Function &func)
@@ -195,6 +266,48 @@ exec_dispatch(const conduit::Node &field, std::string component, const Function
return res;
}

template<typename Function>
conduit::Node
exec_dispatch_DF(const conduit::Node &l_field, const conduit::Node &r_field, std::string component, const Function &func)
{

conduit::Node res;
const std::string exec_policy = ExecutionManager::execution_policy();
//std::cout<<"Exec policy "<<exec_policy<<"\n";
if(exec_policy == "serial")
{
SerialExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#if defined(ASCENT_OPENMP_ENABLED) && defined(ASCENT_RAJA_ENABLED)
else if(exec_policy == "openmp")
{
OpenMPExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#endif
#if defined(ASCENT_CUDA_ENABLED)
else if(exec_policy == "cuda")
{
CudaExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#endif
#if defined(ASCENT_HIP_ENABLED)
else if(exec_policy == "hip")
{
HipExec exec;
res = dispatch_memory_DF(l_field, r_field, component, func, exec);
}
#endif
else
{
ASCENT_ERROR("Execution dispatch: unsupported execution policy "<<
exec_policy);
}
return res;
}
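The exec_dispatch pattern above — pick a backend tag from a policy string, then forward every case to one shared templated body — can be illustrated without RAJA (the tags and function names here are illustrative stand-ins, not Ascent's types):

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Backend tags standing in for SerialExec / OpenMPExec / CudaExec / HipExec.
struct SerialTag {};
struct OpenMPTag {};

// One templated body shared by every backend, like dispatch_memory_DF:
// the algorithm is written once and instantiated per execution policy.
template<typename Exec>
double sum_with(const std::vector<double> &v, const Exec &)
{
    double s = 0.0;
    for(double x : v) s += x; // a real backend would parallelize this loop
    return s;
}

// String-keyed dispatch, mirroring exec_dispatch_DF's if/else ladder,
// with an error on an unknown policy.
double dispatch_sum(const std::string &policy, const std::vector<double> &v)
{
    if(policy == "serial") return sum_with(v, SerialTag{});
    if(policy == "openmp") return sum_with(v, OpenMPTag{});
    throw std::runtime_error("unsupported execution policy: " + policy);
}
```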

template<typename Function>
conduit::Node
field_dispatch(const conduit::Node &field, const Function &func)
@@ -481,6 +594,67 @@ struct SumFunctor
}
};

struct DFAddFunctor
{
template<typename T, typename Exec>
void operator()(const MemoryAccessor<T> l_accessor,
const MemoryAccessor<T> r_accessor,
conduit::Node &output,
const Exec &) const
{
const int l_size = l_accessor.m_size;
const int r_size = r_accessor.m_size;
bool diff_sizes = false;
int size = l_size;
int max_size = l_size;

if(l_size != r_size)
{
size = std::min(l_size, r_size);
max_size = std::max(l_size, r_size);
diff_sizes = true;
}

// use a heap buffer instead of a variable-length stack array, so the
// lambdas capture a pointer (copying the pointer, not the array)
std::vector<double> values_buf(max_size);
double *values = values_buf.data();

using for_policy = typename Exec::for_policy;
using reduce_policy = typename Exec::reduce_policy;

ascent::forall<for_policy>(0, size, [=] ASCENT_LAMBDA(index_t i)
{
const T val = l_accessor[i] + r_accessor[i];
values[i] = val;
});
ASCENT_DEVICE_ERROR_CHECK();

if(diff_sizes)
{
if(l_size > r_size)
{
ascent::forall<for_policy>(size, l_size, [=] ASCENT_LAMBDA(index_t i)
{
const T val = l_accessor[i];
values[i] = val;
});
ASCENT_DEVICE_ERROR_CHECK();
Member commented:

to confirm my understanding:

If one field is larger than another, the output will be sized to the larger field and the remaining vals are simply copied.

Contributor (Author) replied:

Correct. That's what I was going for here: treat the missing entries as zero, so the extra values are simply copied. I figured it's ok to take fields of different sizes? Or do I need to be concerned about topology further down the pipeline?

Member replied:

Thanks, yes this makes sense.

We aren't likely to hit these cases often b/c Blueprint fields on the same topology with the same association should share cardinality. But it's much better to handle the logic than to have a real head-scratcher crash down the line.

}
else
{
ascent::forall<for_policy>(size, r_size, [=] ASCENT_LAMBDA(index_t i)
{
const T val = r_accessor[i];
values[i] = val;
});
ASCENT_DEVICE_ERROR_CHECK();
}
}

output["values"].set(values, max_size);
}
};
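The size-mismatch semantics discussed in the review — the overlap is added, then the tail of the longer field is copied through (the shorter field is treated as zero-extended) — can be checked with a minimal host-side sketch of DFAddFunctor's logic:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Host-side model of DFAddFunctor: add over the common prefix,
// then copy the remaining values from the longer input.
std::vector<double> add_fields(const std::vector<double> &l,
                               const std::vector<double> &r)
{
    const std::size_t n        = std::min(l.size(), r.size());
    const std::size_t max_size = std::max(l.size(), r.size());
    std::vector<double> out(max_size);

    // overlap: element-wise sum
    for(std::size_t i = 0; i < n; ++i)
        out[i] = l[i] + r[i];

    // tail: copied from whichever field is longer (shorter is zero-extended)
    const std::vector<double> &longer = (l.size() > r.size()) ? l : r;
    for(std::size_t i = n; i < max_size; ++i)
        out[i] = longer[i];

    return out;
}
```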

struct NanFunctor
{
template<typename T, typename Exec>
@@ -742,6 +916,13 @@ array_sum(const conduit::Node &array,

return res;
}

conduit::Node
derived_field_add_reduction(const conduit::Node &l_field, const conduit::Node &r_field, const std::string &component)
{
return detail::exec_dispatch_DF(l_field, r_field, component, detail::DFAddFunctor());
}

//-----------------------------------------------------------------------------
};
//-----------------------------------------------------------------------------
@@ -74,6 +74,10 @@ conduit::Node ASCENT_API array_min(const conduit::Node &array,
conduit::Node ASCENT_API array_sum(const conduit::Node &array,
const std::string &exec_loc,
const std::string &component = "");

conduit::Node ASCENT_API derived_field_add_reduction(const conduit::Node &l_field,
const conduit::Node &r_field,
const std::string &component = "");
};
//-----------------------------------------------------------------------------
// -- end ascent::runtime::expressions--
Expand Down