Gather one hop neighbors #2117

Merged
merged 12 commits on Mar 21, 2022
7 changes: 6 additions & 1 deletion ci/gpu/build.sh
@@ -62,8 +62,13 @@ conda activate rapids
export PATH=$(conda info --base)/envs/rapids/bin:$PATH

gpuci_logger "Install dependencies"
# Assume libcudf will be installed via cudf. This is done to prevent the
# following:
# libcudf = 22.04.00a220315, cudf = 22.04.00a220308
# where cudf 220308 was chosen possibly because it has fewer/different
# dependencies and the corresponding recipes think they're compatible when they
# may not be.
gpuci_mamba_retry install -y \
"libcudf=${MINOR_VERSION}" \
"cudf=${MINOR_VERSION}" \
"librmm=${MINOR_VERSION}" \
"rmm=${MINOR_VERSION}" \
4 changes: 2 additions & 2 deletions conda/recipes/libcugraph/meta.yaml
@@ -44,8 +44,8 @@ requirements:
- boost-cpp>=1.66
- nccl>=2.9.9
- ucx-proc=*=gpu
- gtest
- gmock
- gtest=1.10.0 # FIXME: pinned to version in https://github.com/rapidsai/integration/blob/branch-22.04/conda/recipes/versions.yaml
- gmock=1.10.0 # FIXME: pinned to version in https://github.com/rapidsai/integration/blob/branch-22.04/conda/recipes/versions.yaml
run:
- {{ pin_compatible('cudatoolkit', max_pin='x', min_pin='x') }}
- libraft-headers {{ minor_version }}
373 changes: 373 additions & 0 deletions cpp/include/cugraph/detail/decompress_matrix_partition.cuh

Large diffs are not rendered by default.

66 changes: 52 additions & 14 deletions cpp/include/cugraph/detail/graph_functions.cuh
@@ -76,6 +76,13 @@ std::tuple<rmm::device_uvector<typename GraphViewType::edge_type>,
rmm::device_uvector<typename GraphViewType::edge_type>>
get_global_degree_information(raft::handle_t const& handle, GraphViewType const& graph_view);

template <typename GraphViewType>
rmm::device_uvector<typename GraphViewType::edge_type> get_global_adjacency_offset(
raft::handle_t const& handle,
GraphViewType const& graph_view,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_degree_offsets,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_out_degrees);
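
For context, here is a minimal sketch of how the new get_global_adjacency_offset helper can be chained after get_global_degree_information. This is an illustration, not code from this PR; the wrapper name and the assumed element order of the returned tuple (degree offsets first, out-degrees second) are assumptions.

```cpp
#include <cugraph/detail/graph_functions.cuh>

#include <raft/handle.hpp>
#include <rmm/device_uvector.hpp>

// Sketch: derive the global adjacency list offsets from the global degree
// information. The element order of the returned tuple is assumed to match
// the (global_degree_offsets, global_out_degrees) parameter order below.
template <typename GraphViewType>
rmm::device_uvector<typename GraphViewType::edge_type> compute_global_adjacency_offsets(
  raft::handle_t const& handle, GraphViewType const& graph_view)
{
  auto [global_degree_offsets, global_out_degrees] =
    cugraph::detail::get_global_degree_information(handle, graph_view);
  return cugraph::detail::get_global_adjacency_offset(
    handle, graph_view, global_degree_offsets, global_out_degrees);
}
```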

/**
* @brief Gather active sources and associated client gpu ids across gpus in a
* column communicator
@@ -158,34 +165,65 @@ partition_information(raft::handle_t const& handle, GraphViewType const& graph_v
* Collect all the edges that are present in the adjacency lists on the current gpu
*
* @tparam GraphViewType Type of the passed non-owning graph object.
* @tparam EdgeIndexIterator Type of the iterator for edge indices.
* @tparam GPUIdIterator Type of the iterator for gpu id identifiers.
* @param handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator, and
* handles to various CUDA libraries) to run graph algorithms.
* @param graph_view Non-owning graph object.
* @param active_majors_in_row Device vector containing all the vertex id that are processed by
* @param[in] handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator,
* and handles to various CUDA libraries) to run graph algorithms.
* @param[in] graph_view Non-owning graph object.
* @param[in] active_majors_in_row Device vector containing all the vertex ids that are processed by
* gpus in the column communicator
* @param active_major_gpu_ids Device vector containing the gpu id associated by every vertex
* @param[in] active_major_gpu_ids Device vector containing the gpu id associated with every vertex
* present in active_majors_in_row
* @param edge_index_first Iterator pointing to the first destination index
* @param indices_per_source Number of indices supplied for every source in the range
* @param[in] minor_map Device vector of destination indices (modifiable in-place) corresponding to
* vertex IDs being returned
* @param[in] indices_per_major Number of indices supplied for every major in the range
* [vertex_input_first, vertex_input_last)
* @param global_degree_offset Global degree offset to local adjacency list for every source
* @param[in] global_degree_offsets Global degree offsets into the local adjacency list for every source
* represented by current gpu
* @return A tuple of device vector containing the majors, minors and gpu_ids gathered locally
* @return A tuple of device vectors containing the majors, minors, gpu_ids and indices gathered
* locally
*/
template <typename GraphViewType, typename EdgeIndexIterator, typename gpu_t>
template <typename GraphViewType, typename gpu_t>
std::tuple<rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<gpu_t>>
rmm::device_uvector<gpu_t>,
rmm::device_uvector<typename GraphViewType::edge_type>>
gather_local_edges(
raft::handle_t const& handle,
GraphViewType const& graph_view,
const rmm::device_uvector<typename GraphViewType::vertex_type>& active_majors_in_row,
const rmm::device_uvector<gpu_t>& active_major_gpu_ids,
EdgeIndexIterator edge_index_first,
rmm::device_uvector<typename GraphViewType::edge_type>&& minor_map,
typename GraphViewType::edge_type indices_per_major,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_degree_offsets);
const rmm::device_uvector<typename GraphViewType::edge_type>& global_degree_offsets,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_adjacency_list_offsets);
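
A hypothetical call-site sketch for the updated gather_local_edges signature, wiring in the new global_adjacency_list_offsets argument produced by get_global_adjacency_offset. The wrapper name and the choice of int32_t as the gpu id type are assumptions; the other inputs are presumed to come from the earlier gather/degree steps of the sampling primitive.

```cpp
#include <cugraph/detail/graph_functions.cuh>

#include <raft/handle.hpp>
#include <rmm/device_uvector.hpp>

#include <cstdint>
#include <utility>

// Sketch: gather the locally stored edges of the active majors and return a
// tuple of device vectors (majors, minors, gpu ids, edge indices).
template <typename GraphViewType>
auto gather_edges_for_active_majors(
  raft::handle_t const& handle,
  GraphViewType const& graph_view,
  rmm::device_uvector<typename GraphViewType::vertex_type> const& active_majors_in_row,
  rmm::device_uvector<int32_t> const& active_major_gpu_ids,
  rmm::device_uvector<typename GraphViewType::edge_type>&& minor_map,
  typename GraphViewType::edge_type indices_per_major,
  rmm::device_uvector<typename GraphViewType::edge_type> const& global_degree_offsets,
  rmm::device_uvector<typename GraphViewType::edge_type> const& global_adjacency_list_offsets)
{
  // minor_map is consumed (taken by rvalue reference), hence the std::move.
  return cugraph::detail::gather_local_edges(handle,
                                             graph_view,
                                             active_majors_in_row,
                                             active_major_gpu_ids,
                                             std::move(minor_map),
                                             indices_per_major,
                                             global_degree_offsets,
                                             global_adjacency_list_offsets);
}
```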

/**
* @brief Gather edge list for specified vertices
*
* Collect all the edges that are present in the adjacency lists on the current gpu
*
* @tparam GraphViewType Type of the passed non-owning graph object.
* @tparam prop_t Type of the property associated with the majors.
* @param handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator, and
* handles to various CUDA libraries) to run graph algorithms.
* @param graph_view Non-owning graph object.
* @param active_majors_in_row Device vector containing all the vertex ids that are processed by
* gpus in the column communicator
* @param active_major_property Device vector containing the property values associated with every
* vertex present in active_majors_in_row
* @return A tuple of device vectors containing the majors, minors and properties gathered locally
*/
template <typename GraphViewType, typename prop_t>
std::tuple<rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<typename GraphViewType::vertex_type>,
rmm::device_uvector<prop_t>,
rmm::device_uvector<typename GraphViewType::edge_type>>
gather_one_hop_edgelist(
raft::handle_t const& handle,
GraphViewType const& graph_view,
const rmm::device_uvector<typename GraphViewType::vertex_type>& active_majors_in_row,
const rmm::device_uvector<prop_t>& active_major_property,
const rmm::device_uvector<typename GraphViewType::edge_type>& global_adjacency_list_offsets);
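
Similarly, a hypothetical sketch of calling the new gather_one_hop_edgelist primitive; the int32_t property type and the assumption that the offsets come from get_global_adjacency_offset are illustrative choices, not part of this diff.

```cpp
#include <cugraph/detail/graph_functions.cuh>

#include <raft/handle.hpp>
#include <rmm/device_uvector.hpp>

#include <cstdint>

// Sketch: gather every locally stored edge whose source appears in
// active_majors_in_row, along with the property attached to that source.
// Returns a tuple of device vectors (majors, minors, properties, and the
// fourth edge_type vector declared above).
template <typename GraphViewType>
auto gather_one_hop_for_active_majors(
  raft::handle_t const& handle,
  GraphViewType const& graph_view,
  rmm::device_uvector<typename GraphViewType::vertex_type> const& active_majors_in_row,
  rmm::device_uvector<int32_t> const& active_major_property,
  rmm::device_uvector<typename GraphViewType::edge_type> const& global_adjacency_list_offsets)
{
  return cugraph::detail::gather_one_hop_edgelist(handle,
                                                  graph_view,
                                                  active_majors_in_row,
                                                  active_major_property,
                                                  global_adjacency_list_offsets);
}
```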

} // namespace detail
