[BugFix] Fix consistency problem #8896

Merged 2 commits on Jul 19, 2022
Changes from 1 commit
7 changes: 1 addition & 6 deletions be/src/exec/pipeline/fragment_executor.cpp
@@ -546,12 +546,7 @@ Status FragmentExecutor::_decompose_data_sink_to_operator(RuntimeState* runtime_
if (sender->get_partition_type() == TPartitionType::HASH_PARTITIONED ||
sender->get_partition_type() == TPartitionType::BUCKET_SHUFFLE_HASH_PARTITIONED) {
dest_dop = t_stream_sink.dest_dop;

-            // UNPARTITIONED mode will be performed if both num of destination and dest dop is 1
-            // So we only enable pipeline level shuffle when num of destination or dest dop is greater than 1
-            if (sender->destinations().size() > 1 || dest_dop > 1) {
Collaborator:

Why change this logic?

@liuyehcf (Contributor, Author) commented Jul 19, 2022:

This condition is redundant; ExchangeSinkOperator's constructor takes both the destination count and the dest dop into consideration:

ExchangeSinkOperator::ExchangeSinkOperator(OperatorFactory* factory, int32_t id, int32_t plan_node_id,
                                           int32_t driver_sequence, const std::shared_ptr<SinkBuffer>& buffer,
                                           TPartitionType::type part_type,
                                           const std::vector<TPlanFragmentDestination>& destinations,
                                           bool is_pipeline_level_shuffle, const int32_t num_shuffles_per_channel,
                                           int32_t sender_id, PlanNodeId dest_node_id,
                                           const std::vector<ExprContext*>& partition_expr_ctxs,
                                           bool enable_exchange_pass_through, FragmentContext* const fragment_ctx,
                                           const std::vector<int32_t>& output_columns)
        : Operator(factory, id, "exchange_sink", plan_node_id, driver_sequence),
          _buffer(buffer),
          _part_type(part_type),
          _destinations(destinations),
          _num_shuffles_per_channel(num_shuffles_per_channel > 0 ? num_shuffles_per_channel : 1),
          _sender_id(sender_id),
          _dest_node_id(dest_node_id),
          _partition_expr_ctxs(partition_expr_ctxs),
          _fragment_ctx(fragment_ctx),
          _output_columns(output_columns) {
...
    _num_shuffles = _channels.size() * _num_shuffles_per_channel;

    _is_pipeline_level_shuffle = is_pipeline_level_shuffle && (_num_shuffles > 1);
}

-                is_pipeline_level_shuffle = true;
-            }
+            is_pipeline_level_shuffle = true;
             DCHECK_GT(dest_dop, 0);
         }

35 changes: 27 additions & 8 deletions be/src/runtime/data_stream_recvr.cc
@@ -150,7 +150,9 @@ class DataStreamRecvr::SenderQueue {

typedef std::list<ChunkItem> ChunkQueue;
ChunkQueue _chunk_queue;
#ifdef DEBUG
Collaborator:

Why is this variable only used in debug mode?

bool _is_pipeline_level_shuffle_init = false;
#endif
bool _is_pipeline_level_shuffle = false;
std::vector<bool> _has_chunks_per_driver_sequence;
serde::ProtobufChunkMeta _chunk_meta;
@@ -471,12 +473,21 @@ Status DataStreamRecvr::SenderQueue::add_chunks(const PTransmitChunkParams& requ
ChunkQueue chunks;
size_t total_chunk_bytes = 0;
faststring uncompressed_buffer;
bool prev_is_pipeline_level_shuffle = _is_pipeline_level_shuffle;

#ifdef DEBUG
{
std::lock_guard<std::mutex> l(_lock);
Collaborator:

Do we really need to keep so much code only for debugging?

@liuyehcf (Contributor, Author) commented Jul 19, 2022:

This code is only for DCHECKs. The logic of pipeline-level shuffle is quite implicit and complex, and I think adding DCHECKs can expose problems as soon as possible. Otherwise, once a bug is introduced, it will be difficult to troubleshoot.

bool prev_is_pipeline_level_shuffle = _is_pipeline_level_shuffle;
_is_pipeline_level_shuffle =
_recvr->_is_pipeline && request.has_is_pipeline_level_shuffle() && request.is_pipeline_level_shuffle();
// _is_pipeline_level_shuffle must be stable after first assignment
DCHECK(!_is_pipeline_level_shuffle_init || (prev_is_pipeline_level_shuffle == _is_pipeline_level_shuffle));
_is_pipeline_level_shuffle_init = true;
}
#else
_is_pipeline_level_shuffle =
_recvr->_is_pipeline && request.has_is_pipeline_level_shuffle() && request.is_pipeline_level_shuffle();
// _is_pipeline_level_shuffle must be stable after first assignment
DCHECK(!_is_pipeline_level_shuffle_init || (prev_is_pipeline_level_shuffle == _is_pipeline_level_shuffle));
_is_pipeline_level_shuffle_init = true;
#endif

if (use_pass_through) {
ChunkUniquePtrVector swap_chunks;
@@ -604,12 +615,20 @@ Status DataStreamRecvr::SenderQueue::add_chunks_and_keep_order(const PTransmitCh
size_t total_chunk_bytes = 0;
faststring uncompressed_buffer;
ChunkQueue local_chunk_queue;
bool prev_is_pipeline_level_shuffle = _is_pipeline_level_shuffle;
#ifdef DEBUG
{
std::lock_guard<std::mutex> l(_lock);
bool prev_is_pipeline_level_shuffle = _is_pipeline_level_shuffle;
_is_pipeline_level_shuffle =
_recvr->_is_pipeline && request.has_is_pipeline_level_shuffle() && request.is_pipeline_level_shuffle();
// _is_pipeline_level_shuffle must be stable after first assignment
DCHECK(!_is_pipeline_level_shuffle_init || (prev_is_pipeline_level_shuffle == _is_pipeline_level_shuffle));
_is_pipeline_level_shuffle_init = true;
}
#else
_is_pipeline_level_shuffle =
_recvr->_is_pipeline && request.has_is_pipeline_level_shuffle() && request.is_pipeline_level_shuffle();
// _is_pipeline_level_shuffle must be stable after first assignment
DCHECK(!_is_pipeline_level_shuffle_init || (prev_is_pipeline_level_shuffle == _is_pipeline_level_shuffle));
_is_pipeline_level_shuffle_init = true;
#endif

if (use_pass_through) {
ChunkUniquePtrVector swap_chunks;