Use non-deprecated console_bridge macros and undefine the deprecated ones #1149

Merged: 1 commit merged on Aug 28, 2017
16 changes: 14 additions & 2 deletions tools/rosbag_storage/include/rosbag/bag.h
@@ -61,6 +61,18 @@
#include <boost/iterator/iterator_facade.hpp>

#include "console_bridge/console.h"
+#if defined logDebug
+# undef logDebug
+#endif
+#if defined logInform
+# undef logInform
+#endif
+#if defined logWarn
+# undef logWarn
+#endif
+#if defined logError
+# undef logError
+#endif

namespace rosbag {

@@ -569,7 +581,7 @@ void Bag::doWrite(std::string const& topic, ros::Time const& time, T const& msg,

// Check if we want to stop this chunk
uint32_t chunk_size = getChunkOffset();
-logDebug("  curr_chunk_size=%d (threshold=%d)", chunk_size, chunk_threshold_);
+CONSOLE_BRIDGE_logDebug("  curr_chunk_size=%d (threshold=%d)", chunk_size, chunk_threshold_);
Contributor:
Unfortunately, this doesn't work on Indigo. From what I can tell, /usr/include/console_bridge/console.h doesn't have the CONSOLE_BRIDGE_log* macros on Indigo. I have an upcoming patch which I'll push here which defines them for the rosbag/bag.h header file. It's not a generic solution (like having it in console_bridge/bridge.h would be), but I'm not sure if we want to introduce that this late into Indigo. Let me know what you think.

Member Author:
This patch is targeting lunar-devel and there is no reason to backport this into Indigo. Therefore I don't think the second commit is necessary.

Contributor:

Oh, sorry, I missed that. I got confused because the initial bug report was for Indigo.

Alright, so for Lunar, I agree with your original patch (I'll revert mine off of this branch). However, that still leaves the original bug report about Indigo. It sounds like you want to just leave this be on Indigo; is that the case? If so, once we merge this patch I'll close the original bug report saying that it won't be fixed back in Indigo, but will be fixed going forward. Sound reasonable?

Member Author:

> It sounds like you want to just leave this be on Indigo; is that the case?

Yes, the version of console_bridge in Indigo doesn't have the namespaced macros. Therefore I don't think every higher-level package needs to work around this upstream problem.

The patch could be backported to Kinetic if the console_bridge version on the platforms targeted by that distribution supports this.

if (chunk_size > chunk_threshold_) {
// Empty the outgoing chunk
stopWritingChunk();
@@ -604,7 +616,7 @@ void Bag::writeMessageDataRecord(uint32_t conn_id, ros::Time const& time, T cons
seek(0, std::ios::end);
file_size_ = file_.getOffset();

-logDebug("Writing MSG_DATA [%llu:%d]: conn=%d sec=%d nsec=%d data_len=%d",
+CONSOLE_BRIDGE_logDebug("Writing MSG_DATA [%llu:%d]: conn=%d sec=%d nsec=%d data_len=%d",
(unsigned long long) file_.getOffset(), getChunkOffset(), conn_id, time.sec, time.nsec, msg_ser_len);

writeHeader(header);
58 changes: 29 additions & 29 deletions tools/rosbag_storage/src/bag.cpp
@@ -213,7 +213,7 @@ void Bag::setCompression(CompressionType compression) {
void Bag::writeVersion() {
string version = string("#ROSBAG V") + VERSION + string("\n");

-logDebug("Writing VERSION [%llu]: %s", (unsigned long long) file_.getOffset(), version.c_str());
+CONSOLE_BRIDGE_logDebug("Writing VERSION [%llu]: %s", (unsigned long long) file_.getOffset(), version.c_str());

version_ = 200;

@@ -236,7 +236,7 @@ void Bag::readVersion() {

version_ = version_major * 100 + version_minor;

-logDebug("Read VERSION: version=%d", version_);
+CONSOLE_BRIDGE_logDebug("Read VERSION: version=%d", version_);
}

uint32_t Bag::getMajorVersion() const { return version_ / 100; }
@@ -325,7 +325,7 @@ void Bag::startReadingVersion102() {
multiset<IndexEntry> const& index = i->second;
IndexEntry const& first_entry = *index.begin();

-logDebug("Reading message definition for connection %d at %llu", i->first, (unsigned long long) first_entry.chunk_pos);
+CONSOLE_BRIDGE_logDebug("Reading message definition for connection %d at %llu", i->first, (unsigned long long) first_entry.chunk_pos);

seek(first_entry.chunk_pos);

@@ -339,7 +339,7 @@ void Bag::writeFileHeaderRecord() {
connection_count_ = connections_.size();
chunk_count_ = chunks_.size();

-logDebug("Writing FILE_HEADER [%llu]: index_pos=%llu connection_count=%d chunk_count=%d",
+CONSOLE_BRIDGE_logDebug("Writing FILE_HEADER [%llu]: index_pos=%llu connection_count=%d chunk_count=%d",
(unsigned long long) file_.getOffset(), (unsigned long long) index_data_pos_, connection_count_, chunk_count_);

// Write file header record
@@ -390,7 +390,7 @@ void Bag::readFileHeaderRecord() {
readField(fields, CHUNK_COUNT_FIELD_NAME, true, &chunk_count_);
}

-logDebug("Read FILE_HEADER: index_pos=%llu connection_count=%d chunk_count=%d",
+CONSOLE_BRIDGE_logDebug("Read FILE_HEADER: index_pos=%llu connection_count=%d chunk_count=%d",
(unsigned long long) index_data_pos_, connection_count_, chunk_count_);

// Skip the data section (just padding)
@@ -460,7 +460,7 @@ void Bag::writeChunkHeader(CompressionType compression, uint32_t compressed_size
chunk_header.compressed_size = compressed_size;
chunk_header.uncompressed_size = uncompressed_size;

-logDebug("Writing CHUNK [%llu]: compression=%s compressed=%d uncompressed=%d",
+CONSOLE_BRIDGE_logDebug("Writing CHUNK [%llu]: compression=%s compressed=%d uncompressed=%d",
(unsigned long long) file_.getOffset(), chunk_header.compression.c_str(), chunk_header.compressed_size, chunk_header.uncompressed_size);

M_string header;
@@ -485,7 +485,7 @@ void Bag::readChunkHeader(ChunkHeader& chunk_header) const {
readField(fields, COMPRESSION_FIELD_NAME, true, chunk_header.compression);
readField(fields, SIZE_FIELD_NAME, true, &chunk_header.uncompressed_size);

-logDebug("Read CHUNK: compression=%s size=%d uncompressed=%d (%f)", chunk_header.compression.c_str(), chunk_header.compressed_size, chunk_header.uncompressed_size, 100 * ((double) chunk_header.compressed_size) / chunk_header.uncompressed_size);
+CONSOLE_BRIDGE_logDebug("Read CHUNK: compression=%s size=%d uncompressed=%d (%f)", chunk_header.compression.c_str(), chunk_header.compressed_size, chunk_header.uncompressed_size, 100 * ((double) chunk_header.compressed_size) / chunk_header.uncompressed_size);
}

// Index records
@@ -506,15 +506,15 @@ void Bag::writeIndexRecords() {

writeDataLength(index_size * 12);

-logDebug("Writing INDEX_DATA: connection=%d ver=%d count=%d", connection_id, INDEX_VERSION, index_size);
+CONSOLE_BRIDGE_logDebug("Writing INDEX_DATA: connection=%d ver=%d count=%d", connection_id, INDEX_VERSION, index_size);

// Write the index record data (pairs of timestamp and position in file)
foreach(IndexEntry const& e, index) {
write((char*) &e.time.sec, 4);
write((char*) &e.time.nsec, 4);
write((char*) &e.offset, 4);

-logDebug("   - %d.%d: %d", e.time.sec, e.time.nsec, e.offset);
+CONSOLE_BRIDGE_logDebug("   - %d.%d: %d", e.time.sec, e.time.nsec, e.offset);
}
}
}
@@ -536,7 +536,7 @@ void Bag::readTopicIndexRecord102() {
readField(fields, TOPIC_FIELD_NAME, true, topic);
readField(fields, COUNT_FIELD_NAME, true, &count);

-logDebug("Read INDEX_DATA: ver=%d topic=%s count=%d", index_version, topic.c_str(), count);
+CONSOLE_BRIDGE_logDebug("Read INDEX_DATA: ver=%d topic=%s count=%d", index_version, topic.c_str(), count);

if (index_version != 0)
throw BagFormatException((format("Unsupported INDEX_DATA version: %1%") % index_version).str());
@@ -546,7 +546,7 @@ void Bag::readTopicIndexRecord102() {
if (topic_conn_id_iter == topic_connection_ids_.end()) {
connection_id = connections_.size();

-logDebug("Creating connection: id=%d topic=%s", connection_id, topic.c_str());
+CONSOLE_BRIDGE_logDebug("Creating connection: id=%d topic=%s", connection_id, topic.c_str());
ConnectionInfo* connection_info = new ConnectionInfo();
connection_info->id = connection_id;
connection_info->topic = topic;
@@ -569,11 +569,11 @@ void Bag::readTopicIndexRecord102() {
index_entry.time = Time(sec, nsec);
index_entry.offset = 0;

-logDebug("   - %d.%d: %llu", sec, nsec, (unsigned long long) index_entry.chunk_pos);
+CONSOLE_BRIDGE_logDebug("   - %d.%d: %llu", sec, nsec, (unsigned long long) index_entry.chunk_pos);

if (index_entry.time < ros::TIME_MIN || index_entry.time > ros::TIME_MAX)
{
-logError("Index entry for topic %s contains invalid time.", topic.c_str());
+CONSOLE_BRIDGE_logError("Index entry for topic %s contains invalid time.", topic.c_str());
} else
{
connection_index.insert(connection_index.end(), index_entry);
@@ -598,7 +598,7 @@ void Bag::readConnectionIndexRecord200() {
readField(fields, CONNECTION_FIELD_NAME, true, &connection_id);
readField(fields, COUNT_FIELD_NAME, true, &count);

-logDebug("Read INDEX_DATA: ver=%d connection=%d count=%d", index_version, connection_id, count);
+CONSOLE_BRIDGE_logDebug("Read INDEX_DATA: ver=%d connection=%d count=%d", index_version, connection_id, count);

if (index_version != 1)
throw BagFormatException((format("Unsupported INDEX_DATA version: %1%") % index_version).str());
@@ -617,11 +617,11 @@ void Bag::readConnectionIndexRecord200() {
read((char*) &index_entry.offset, 4);
index_entry.time = Time(sec, nsec);

-logDebug("   - %d.%d: %llu+%d", sec, nsec, (unsigned long long) index_entry.chunk_pos, index_entry.offset);
+CONSOLE_BRIDGE_logDebug("   - %d.%d: %llu+%d", sec, nsec, (unsigned long long) index_entry.chunk_pos, index_entry.offset);

if (index_entry.time < ros::TIME_MIN || index_entry.time > ros::TIME_MAX)
{
-logError("Index entry for topic %s contains invalid time.  This message will not be loaded.", connections_[connection_id]->topic.c_str());
+CONSOLE_BRIDGE_logError("Index entry for topic %s contains invalid time.  This message will not be loaded.", connections_[connection_id]->topic.c_str());
} else
{
connection_index.insert(connection_index.end(), index_entry);
@@ -639,7 +639,7 @@ void Bag::writeConnectionRecords() {
}

void Bag::writeConnectionRecord(ConnectionInfo const* connection_info) {
-logDebug("Writing CONNECTION [%llu:%d]: topic=%s id=%d",
+CONSOLE_BRIDGE_logDebug("Writing CONNECTION [%llu:%d]: topic=%s id=%d",
(unsigned long long) file_.getOffset(), getChunkOffset(), connection_info->topic.c_str(), connection_info->id);

M_string header;
@@ -693,7 +693,7 @@ void Bag::readConnectionRecord() {
connection_info->md5sum = (*connection_info->header)["md5sum"];
connections_[id] = connection_info;

-logDebug("Read CONNECTION: topic=%s id=%d", topic.c_str(), id);
+CONSOLE_BRIDGE_logDebug("Read CONNECTION: topic=%s id=%d", topic.c_str(), id);
}
}

@@ -719,7 +719,7 @@ void Bag::readMessageDefinitionRecord102() {
if (topic_conn_id_iter == topic_connection_ids_.end()) {
uint32_t id = connections_.size();

-logDebug("Creating connection: topic=%s md5sum=%s datatype=%s", topic.c_str(), md5sum.c_str(), datatype.c_str());
+CONSOLE_BRIDGE_logDebug("Creating connection: topic=%s md5sum=%s datatype=%s", topic.c_str(), md5sum.c_str(), datatype.c_str());
connection_info = new ConnectionInfo();
connection_info->id = id;
connection_info->topic = topic;
@@ -738,7 +738,7 @@ void Bag::readMessageDefinitionRecord102() {
(*connection_info->header)["md5sum"] = connection_info->md5sum;
(*connection_info->header)["message_definition"] = connection_info->msg_def;

-logDebug("Read MSG_DEF: topic=%s md5sum=%s datatype=%s", topic.c_str(), md5sum.c_str(), datatype.c_str());
+CONSOLE_BRIDGE_logDebug("Read MSG_DEF: topic=%s md5sum=%s datatype=%s", topic.c_str(), md5sum.c_str(), datatype.c_str());
}

void Bag::decompressChunk(uint64_t chunk_pos) const {
@@ -773,7 +773,7 @@ void Bag::decompressChunk(uint64_t chunk_pos) const {
}

void Bag::readMessageDataRecord102(uint64_t offset, ros::Header& header) const {
-logDebug("readMessageDataRecord: offset=%llu", (unsigned long long) offset);
+CONSOLE_BRIDGE_logDebug("readMessageDataRecord: offset=%llu", (unsigned long long) offset);

seek(offset);

@@ -799,7 +799,7 @@ void Bag::decompressRawChunk(ChunkHeader const& chunk_header) const {
assert(chunk_header.compression == COMPRESSION_NONE);
assert(chunk_header.compressed_size == chunk_header.uncompressed_size);

-logDebug("compressed_size: %d uncompressed_size: %d", chunk_header.compressed_size, chunk_header.uncompressed_size);
+CONSOLE_BRIDGE_logDebug("compressed_size: %d uncompressed_size: %d", chunk_header.compressed_size, chunk_header.uncompressed_size);

decompress_buffer_.setSize(chunk_header.compressed_size);
file_.read((char*) decompress_buffer_.getData(), chunk_header.compressed_size);
@@ -812,7 +812,7 @@ void Bag::decompressBz2Chunk(ChunkHeader const& chunk_header) const {

CompressionType compression = compression::BZ2;

-logDebug("compressed_size: %d uncompressed_size: %d", chunk_header.compressed_size, chunk_header.uncompressed_size);
+CONSOLE_BRIDGE_logDebug("compressed_size: %d uncompressed_size: %d", chunk_header.compressed_size, chunk_header.uncompressed_size);

chunk_buffer_.setSize(chunk_header.compressed_size);
file_.read((char*) chunk_buffer_.getData(), chunk_header.compressed_size);
@@ -828,7 +828,7 @@ void Bag::decompressLz4Chunk(ChunkHeader const& chunk_header) const {

CompressionType compression = compression::LZ4;

-logDebug("lz4 compressed_size: %d uncompressed_size: %d",
+CONSOLE_BRIDGE_logDebug("lz4 compressed_size: %d uncompressed_size: %d",
chunk_header.compressed_size, chunk_header.uncompressed_size);

chunk_buffer_.setSize(chunk_header.compressed_size);
@@ -889,7 +889,7 @@ void Bag::writeChunkInfoRecords() {
header[END_TIME_FIELD_NAME] = toHeaderString(&chunk_info.end_time);
header[COUNT_FIELD_NAME] = toHeaderString(&chunk_connection_count);

-logDebug("Writing CHUNK_INFO [%llu]: ver=%d pos=%llu start=%d.%d end=%d.%d",
+CONSOLE_BRIDGE_logDebug("Writing CHUNK_INFO [%llu]: ver=%d pos=%llu start=%d.%d end=%d.%d",
(unsigned long long) file_.getOffset(), CHUNK_INFO_VERSION, (unsigned long long) chunk_info.pos,
chunk_info.start_time.sec, chunk_info.start_time.nsec,
chunk_info.end_time.sec, chunk_info.end_time.nsec);
Expand All @@ -906,7 +906,7 @@ void Bag::writeChunkInfoRecords() {
write((char*) &connection_id, 4);
write((char*) &count, 4);

-logDebug("    - %d: %d", connection_id, count);
+CONSOLE_BRIDGE_logDebug("    - %d: %d", connection_id, count);
}
}
}
@@ -935,7 +935,7 @@ void Bag::readChunkInfoRecord() {
uint32_t chunk_connection_count = 0;
readField(fields, COUNT_FIELD_NAME, true, &chunk_connection_count);

-logDebug("Read CHUNK_INFO: chunk_pos=%llu connection_count=%d start=%d.%d end=%d.%d",
+CONSOLE_BRIDGE_logDebug("Read CHUNK_INFO: chunk_pos=%llu connection_count=%d start=%d.%d end=%d.%d",
(unsigned long long) chunk_info.pos, chunk_connection_count,
chunk_info.start_time.sec, chunk_info.start_time.nsec,
chunk_info.end_time.sec, chunk_info.end_time.nsec);
@@ -946,7 +946,7 @@ void Bag::readChunkInfoRecord() {
read((char*) &connection_id, 4);
read((char*) &connection_count, 4);

-logDebug("  %d: %d messages", connection_id, connection_count);
+CONSOLE_BRIDGE_logDebug("  %d: %d messages", connection_id, connection_count);

chunk_info.connection_counts[connection_id] = connection_count;
}
@@ -1028,7 +1028,7 @@ void Bag::readMessageDataHeaderFromBuffer(Buffer& buffer, uint32_t offset, ros::
total_bytes_read = 0;
uint8_t op = 0xFF;
do {
-logDebug("reading header from buffer: offset=%d", offset);
+CONSOLE_BRIDGE_logDebug("reading header from buffer: offset=%d", offset);
uint32_t bytes_read;
readHeaderFromBuffer(*current_buffer_, offset, header, data_size, bytes_read);
