HBASE-23938 : System table hbase:slowlog to store complete slow/large RPC logs #1681
Conversation
What if the region is temporarily unavailable? Will it block the normal requests?
 * @param slowLogPayload SlowLogPayload to process
 * @return rowKey byte[]
 */
private static byte[] getRowKey(final TooSlowLog.SlowLogPayload slowLogPayload) {
Will this generate a hot spot on the region?
Yes, it can have hotspots, but we are not expecting the system to be slow for long stretches of time. Also, when users scan the system table, they would expect to see slow RPC logs in increasing order of time.
Added a chore to insert records every 10 minutes, hence we are not much worried about hotspotting.
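For context, a minimal sketch of what such a chore could look like with HBase's ScheduledChore API; the chore name is illustrative, and slowLogRecorder, stopper, and choreService are assumed to exist in the enclosing region server context:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;

// Hedged sketch: run every 10 minutes and drain the in-memory queue
// into hbase:slowlog in batches.
ScheduledChore slowLogTableChore =
    new ScheduledChore("SlowLogTableOpsChore", stopper, 10 * 60 * 1000) {
      @Override
      protected void chore() {
        slowLogRecorder.addAllLogsToSysTable();
      }
    };
choreService.scheduleChore(slowLogTableChore);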
We run the chore on a 10-minute window - but if slow syncs happen during that window, do we accumulate them in memory?
Ok, I saw this: public static final int DEFAULT_SLOW_LOG_RING_BUFFER_SIZE = 256;
So at max it is only 256 elements that we store in that queue; the rest will be evicted anyway. At max we have a 256-put list at any point of time. Right?
Just one Q - if this system table is not coming online due to some assignment issue, we don't bother, right? Say the table is not online and while doing the puts we don't get the table - should we not WARN for a certain time and then stop the connection-fetching operation itself? Something like the file-based bucket cache: if there is a failure in writing to the cache we keep retrying, and if the error count exceeds a threshold we WARN and no longer cache the data.
DEFAULT_SLOW_LOG_RING_BUFFER_SIZE = 256 is for the in-memory ring buffer only. For the 10-minute window, yes, we will keep records in memory until the chore can insert them together in a batch. The size of the system-table queue is 1000. The code is here:
if (isSlowLogTableEnabled) {
  EvictingQueue<SlowLogPayload> evictingQueueForTable =
      EvictingQueue.create(SYS_TABLE_QUEUE_SIZE);
  queueForSysTable = Queues.synchronizedQueue(evictingQueueForTable);
} else {
  queueForSysTable = null;
}
Re: "this system table, if not coming online due to some assignment issue - we don't bother, right?" - that is true, and yes, a warn should be present, which is provided here:
} catch (Exception e) {
  LOG.warn("Failed to add slow/large log records to hbase:slowlog table.", e);
}
Retries and timeouts are also bounded here (worst case roughly 20s × 5 retries ≈ 100 seconds, well within the 10-minute chore interval):
Configuration conf = new Configuration(configuration);
// rpc timeout: 20s
conf.setInt(HConstants.HBASE_RPC_TIMEOUT_KEY, 20000);
// retry count: 5
conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 5);
conf.setInt(HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER, 1);
connection = ConnectionFactory.createConnection(conf);
It won't block any read/write requests, but it can block Disruptor RingBuffer events that are being consumed (see hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/slowlog/LogEventHandler.java, line 74 in 68229c9).
@Apache9 Could you please take a look when you get some time? The test that failed in the recent build passes locally.
Will take a look soon.
.setColumnFamily(
    ColumnFamilyDescriptorBuilder.newBuilder(HConstants.SLOWLOG_INFO_FAMILY)
        .setScope(HConstants.REPLICATION_SCOPE_LOCAL)
        .setInMemory(true)
Why do we need inMemory = true?
Should we make blockCaching = false for this table?
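A sketch of that suggestion, reusing the builder chain from the diff above; setBlockCacheEnabled is the standard ColumnFamilyDescriptorBuilder method, and whether setInMemory(true) still makes sense alongside it is the open question here:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;

// Suggested change only - not in the original diff: skip the block cache
// for this write-mostly diagnostic family.
ColumnFamilyDescriptor slowLogFamily =
    ColumnFamilyDescriptorBuilder.newBuilder(HConstants.SLOWLOG_INFO_FAMILY)
        .setScope(HConstants.REPLICATION_SCOPE_LOCAL)
        .setBlockCacheEnabled(false)
        .setInMemory(true)
        .build();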
public void init() throws IOException {
  if (!slowlogTableEnabled) {
    LOG.info("SlowLogTable is not enabled. Quitting.");
The log is a bit confusing - it reads as if a table were enabled/disabled. Can we say slow-request logging to the system table is not enabled? (Or a better message.)
 */
void addAllLogsToSysTable() {
  if (queueForSysTable == null) {
    LOG.warn("hbase.regionserver.slowlog.systable.enabled is turned off. Exiting.");
If the config is turned off, this will log this warn line every 10 minutes, right? I don't think we need to log here at all.
i++;
if (i == 100) {
  SlowLogTableAccessor.addSlowLogRecords(slowLogPayloads, this.connection);
  slowLogPayloads = new ArrayList<>();
Instead, can we do ArrayList#clear?
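A sketch of what that would look like; this is safe here only because addSlowLogRecords consumes the list synchronously before returning:

if (i == 100) {
  SlowLogTableAccessor.addSlowLogRecords(slowLogPayloads, this.connection);
  // Reuse the backing array instead of reallocating a new list each batch.
  slowLogPayloads.clear();
}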
while (!queueForSysTable.isEmpty()) {
  slowLogPayloads.add(queueForSysTable.poll());
  i++;
  if (i == 100) {
For 100 rows, approximately what will be the total write request size?
I believe if the region name is quite long, in the worst case each request might be ~3-4 KB, so a batch of 100 is roughly 300-400 KB.
Also, mutate() allows 5000 rows in a single request by default, hence this is a reasonably low number of rows per request, and the size is not much higher either.
 */
public static void addSlowLogRecords(final List<TooSlowLog.SlowLogPayload> slowLogPayloads,
    final Connection connection) {
  List<Put> puts = new ArrayList<>();
The puts list can be sized upfront as slowLogPayloads.size().
  }
  try {
    doPut(connection, puts);
  } catch (IOException e) {
Is there any chance of a non-IOE? Should we catch Exception instead? Otherwise, any non-IOE ever thrown could terminate the chore.
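A sketch of the broadened catch, mirroring the warn message quoted earlier in this review:

try {
  doPut(connection, puts);
} catch (Exception e) { // broadened from IOException so the chore thread survives
  LOG.warn("Failed to add slow/large log records to hbase:slowlog table.", e);
}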
/** hbase:slowlog table name - can be enabled
 * with config - hbase.regionserver.slowlog.systable.enabled
 */
public static final TableName SLOW_LOG_TABLE_NAME =
We are exposing this table name to users, right? I think we need to. Then should we expose the CF and column names also? Granted, this is not the class in which to expose those.
Now we allow all kinds of client ops (writes, disable, alter) on this table? I believe on the master branch we allow tables in the system namespace to be disabled, altered, etc. There was a Jira around the META table. Will that be OK (disabling this table and leaving it in that state)?
Yes, I tried disable and enable of 'hbase:slowlog' locally and both worked fine.
Are we exposing this table name to customers? This is a public class. If we expose the name for users to create queries, we might have to expose the column names also, but those cannot be put in this class. Any thoughts on having a new public class, if needed, to expose them? If we don't want to expose this table name, we should not keep it in the public TableName class.
private static void doPut(final Connection connection, final List<Put> puts)
    throws IOException {
  try (Table table = connection.getTable(TableName.SLOW_LOG_TABLE_NAME)) {
    table.put(puts);
If for some reason this put does not happen on some occasion, we will just throw away those logs. Anyway, we are OK to lose some data, as we already disabled the WAL.
By default HBase has a 60-second RPC timeout and 35 retries, meaning overall it will be > 10 minutes, which is the default chore interval. Can we adjust these configs? The 35 retries we can reduce anyway.
@anoopsjohn On the master branch, this retry count is 45?
public static void setServerSideHConnectionRetriesConfig(final Configuration c, final String sn,
    final Logger log) {
  // TODO: Fix this. Not all connections from server side should have 10 times the retries.
  int hcRetries = c.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
      HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
  // Go big. Multiply by 10. If we can't get to meta after this many retries
  // then something seriously wrong.
  int serversideMultiplier = c.getInt(HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER,
      HConstants.DEFAULT_HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER);
  int retries = hcRetries * serversideMultiplier;
  c.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, retries);
  log.info(sn + " server-side Connection retries=" + retries);
}
Or am I missing some calculation?
Let me get these configs:
conf.setInt(HConstants.HBASE_RPC_TIMEOUT_KEY, 20000);
conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 5);
conf.setInt(HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER, 1);
So I can see that you have reduced the RPC timeout and retries. Fine.
Yeah, it seems for server -> server RPCs we will have a default of 15 * 3 = 45 retries.
I can see the client-side retries default is 15 now, not 35. I believe this was changed on the master branch; not sure in which Jira.
I left some questions and comments. Overall looks good to me. Thanks @virajjasani.
      }
    }
  }
  doPut(connection, puts);
In a 10-minute window we will do the puts - can it really hotspot the region? Also, at the RPC handler layer, should we give it the lowest priority? System tables like META have high priority, I believe - need to check the code. Should we lower it for this table?
Sure, that should be fine, but we already have WAL disabled + block cache disabled. Do you think it is fine not to touch the RPC-handler-level priority? We already don't have tight consistency and resiliency for this system table, so there are already chances of losing some data :)
Thoughts? I am fine with lowering the priority anyway.
The above is a valid Q and a nice observation. The priority of the request is calculated based on the system namespace; as this table is in the 'hbase' namespace, it will get the highest priority. It's not about WAL writes or cache usage - the RPC will be handled by the priority handler threads and queue. I believe we can set it at the individual request level.
Sure, taken care of in the latest commit.
List<Put> puts = new ArrayList<>(slowLogPayloads.size());
for (TooSlowLog.SlowLogPayload slowLogPayload : slowLogPayloads) {
  final byte[] rowKey = getRowKey(slowLogPayload);
  final Put put = new Put(rowKey).setDurability(Durability.SKIP_WAL)
Good.
queueForRingBuffer = Queues.synchronizedQueue(evictingQueue);
this.isSlowLogTableEnabled = isSlowLogTableEnabled;
if (isSlowLogTableEnabled) {
  EvictingQueue<SlowLogPayload> evictingQueueForTable = EvictingQueue.create(
When system-table logging is enabled, we have two queues. This new one will always have a max size of 1000, whereas 'queueForRingBuffer' has a default of only 256. Either way we will occupy heap for up to 1000 messages - so why should the max in queueForRingBuffer stay reduced?
Can we avoid having two queues and keep just one?
One queue is literally managed by the user only, and its purpose is to serve online slow logs from memory. The other queue is for the chore to insert records into the system table in batches, hence both should not be the same. Even if the user clears the queue, the one intended is the in-memory ring buffer queue. If the user has opted for the system table, the other queue is totally managed internally and not up to the user to manage. The user might also opt for a lower value of the config hbase.regionserver.slowlog.ringbuffer.size, say 100 or 50, but there is no guarantee that within a 10-minute chore run the number of slow RPC calls stays below 50, hence it is better to have a separate queue for system-table maintenance.
@@ -53,12 +55,28 @@
 class LogEventHandler implements EventHandler<RingBufferEnvelope> {

   private static final Logger LOG = LoggerFactory.getLogger(LogEventHandler.class);
+  private static final int SYS_TABLE_QUEUE_SIZE = 1000;
Do you want a config for this, like 'hbase.regionserver.slowlog.ringbuffer.size'?
That would conflict with the existing one. We have hbase.regionserver.slowlog.ringbuffer.size, but it relates only to the in-memory ring buffer and has nothing to do with the system table. If you believe we can increase this size, we can do it. It is a rough estimate that within 10 minutes the slow/large log count stays within 1000.
My ask was: do you want to add a new config, similar to the ring buffer's, rather than a hard-coded 1000?
After discussion, it looks like the confusion comes mostly from these queue sizes not matching their intent. Ideally the ring-buffer queue is supposed to hold the slow logs for a much longer time, while the table queue is cleared/reduced every 10 minutes. So ideally the table queue size should be <= the ring-buffer queue size; then all the confusion goes away.
I would suggest we do it this way: add a new config, analogous to 'hbase.regionserver.slowlog.ringbuffer.size', for the table queue size, with its default value being the value of hbase.regionserver.slowlog.ringbuffer.size itself. This two-queue system would then even allow a user to have a table queue size > ring-buffer queue size by tuning the config, as sketched below.
@@ -160,7 +183,7 @@ boolean clearSlowLogs() {
     if (LOG.isDebugEnabled()) {
       LOG.debug("Received request to clean up online slowlog buffer..");
     }
-    queue.clear();
+    queueForRingBuffer.clear();
This will be done when the Admin issues a clearSlowLogsResponses request. Still, it won't reduce RS heap usage, as we have two queues. Is that what we intend?
Yes, this is intended, since the other queue is internal to the system-table insertion chore.
@@ -532,7 +533,7 @@ static int calcPriority(int priority, TableName tableName) {
   }

   static int getPriority(TableName tableName) {
-    if (tableName.isSystemTable()) {
+    if (tableName.isSystemTable() && !tableName.equals(SlowLogTableAccessor.SLOW_LOG_TABLE_NAME)) {
Good. This is where we set the priority.
-      tn != null && tn.isSystemTable() ? HConstants.SYSTEMTABLE_QOS : HConstants.NORMAL_QOS);
+    int priority = HConstants.NORMAL_QOS;
+    if (tn != null && tn.isSystemTable()
+        && !tn.equals(SlowLogTableAccessor.SLOW_LOG_TABLE_NAME)) {
Maybe move this to a util method somewhere, like the isMeta() we have. We could call it isLogTable()?
Should we set the priority on the write request (the Put) directly, rather than having an extra check like this in different places? That would work, right?
Oh, OK, sure - let me update the priority on the Put request. In that case we don't need this change, nor the above check tableName.isSystemTable() && !tableName.equals(SlowLogTableAccessor.SLOW_LOG_TABLE_NAME).
Thanks @ramkrish86 @anoopsjohn
  }
}

private static synchronized void createConnection(Configuration configuration)
It is synchronized - good. Anyway, mostly only one thread will be executing it, I believe.
That is correct - only one thread. However, since this creates the connection, I thought of keeping it synchronized.
conf.setInt(HConstants.HBASE_RPC_TIMEOUT_KEY, 20000);
// retry count: 5
conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 5);
conf.setInt(HConstants.HBASE_CLIENT_SERVERSIDE_RETRIES_MULTIPLIER, 1);
Nice.
String hashcode = String.valueOf(slowLogPayload.hashCode());
String lastFiveDig =
    hashcode.substring((hashcode.length() > 5) ? (hashcode.length() - 5) : 0);
if (lastFiveDig.startsWith("-")) {
Are we going to expose this to the user in case they need to query based on the row key? How are we planning to expose it - some doc?
Sure, we will have docs, but at a high level this table is going to be scanned only by the end user, maybe with multiple ColumnValueFilters. We are not expecting get queries by row key: users are interested in what the records are (e.g. slow logs on table meta done by user xyz, etc.), so they don't need to know the row key.
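For reference, a sketch of the row-key scheme discussed in this thread, reconstructed from the fragments quoted above (the negative-hashCode fallback and the RANDOM field are assumptions; the authoritative version lives in SlowLogTableAccessor):

import java.util.Random;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;

// Row key = current time in millis concatenated with the last five digits
// of the payload's hashCode, so scans return records in rough time order.
private static final Random RANDOM = new Random();

private static byte[] getRowKey(final TooSlowLog.SlowLogPayload slowLogPayload) {
  String hashcode = String.valueOf(slowLogPayload.hashCode());
  String lastFiveDig =
      hashcode.substring((hashcode.length() > 5) ? (hashcode.length() - 5) : 0);
  if (lastFiveDig.startsWith("-")) {
    lastFiveDig = String.valueOf(RANDOM.nextInt(99999)); // assumed fallback for negatives
  }
  final long currentTime = EnvironmentEdgeManager.currentTime();
  return Bytes.toBytes(Long.parseLong(currentTime + lastFiveDig));
}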
while (!queueForSysTable.isEmpty()) {
  slowLogPayloads.add(queueForSysTable.poll());
  i++;
  if (i == 100) {
Better to move this 100 to a constant. Nit.
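E.g., a hypothetical constant replacing the literal 100 above (the name is illustrative):

// Batch size for flushing slow-log payloads to the system table.
private static final int SYS_TABLE_BATCH_SIZE = 100;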
Looks good to me, with some minor comments. Good one @virajjasani.
HBASE-23938 : System table hbase:slowlog to store complete slow/large RPC logs (#1681)
Signed-off-by: Duo Zhang <zhangduo@apache.org>
Signed-off-by: Anoop Sam John <anoopsamjohn@apache.org>
Signed-off-by: ramkrish86 <ramkrishna@apache.org>