[SPARK-48182][SQL] SQL (java side): Migrate error/warn/info with variables to structured logging framework #46450
Conversation
import org.apache.spark.internal.Logger
Used in this place:
Lines 116 to 119 in bd896ca:

private val logInfo = (msg: String) => getAncestorField[Logger](this, 3, "LOG").info(msg)
private val logError = (msg: String, e: Throwable) =>
  getAncestorField[Logger](this, 3, "LOG").error(msg, e)
@@ -38,7 +39,7 @@
  */
 public class TSetIpAddressProcessor<I extends Iface> extends TCLIService.Processor<Iface> {

-  private static final Logger LOGGER = LoggerFactory.getLogger(TSetIpAddressProcessor.class.getName());
+  private static final Logger LOGGER = LoggerFactory.getLogger(TSetIpAddressProcessor.class);
@@ -129,9 +132,8 @@ public static Exception stopQuietly(Service service) {
     try {
       stop(service);
     } catch (Exception e) {
-      LOG.warn("When stopping the service " + service.getName() + " : " + e,
I think `" : " + e` is redundant here, because the `Throwable` is already passed as the trailing `, e` argument.
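The point above can be sketched with plain `java.util.logging` (standing in for Spark's logger; the service name is made up for illustration): when the `Throwable` is passed as its own argument, the framework already records the exception's class, message, and stack trace, so concatenating `" : " + e` only duplicates `e.toString()` inside the message.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class RedundantThrowableDemo {
    private static final Logger LOG = Logger.getLogger(RedundantThrowableDemo.class.getName());

    public static void main(String[] args) {
        Exception e = new IllegalStateException("boom");
        // Redundant: e.toString() is baked into the message text AND the
        // full stack trace is logged from the trailing Throwable argument.
        LOG.log(Level.WARNING, "When stopping the service MyService : " + e, e);
        // Sufficient: let the Throwable argument carry the exception details.
        LOG.log(Level.WARNING, "When stopping the service MyService", e);
        // The string concatenation is literally just e.toString():
        System.out.println(("msg : " + e).equals("msg : " + e.toString()));
    }
}
```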
-    LOG.info(
-        "HiveServer2: Background operation thread keepalive time: " + keepAliveTime + " seconds");
+    LOG.info("HiveServer2: Background operation thread keepalive time: {} ms",
+        MDC.of(LogKeys.THREAD_KEEP_ALIVE_TIME$.MODULE$, keepAliveTime * 1000));
Let's convert the time unit to ms.
@@ -193,4 +193,8 @@ static MessageThrowable of(String message, Throwable throwable) {
     return new MessageThrowable(message, throwable);
   }
 }
@@ -18,7 +18,7 @@

 import java.io.IOException;

-import org.slf4j.Logger;
+import org.apache.spark.internal.Logger;
The following code uses it.

I searched the Spark code base. It seems that the `public static void cleanup(Logger log, java.io.Closeable... closeables)` method is not used. Maybe we can delete the following code?
spark/sql/hive-thriftserver/src/main/java/org/apache/hive/service/ServiceUtils.java
Lines 55 to 67 in d7f69e7:

public static void cleanup(Logger log, java.io.Closeable... closeables) {
  for (java.io.Closeable c : closeables) {
    if (c != null) {
      try {
        c.close();
      } catch (IOException e) {
        if (log != null && log.isDebugEnabled()) {
          log.debug("Exception in closing " + c, e);
        }
      }
    }
  }
}
-      throw new HiveSQLException(msg, "08S01", e);
+      LOG.error("Error verifying delegation token {}", e,
+          MDC.of(LogKeys.TOKEN$.MODULE$, delegationToken));
+      throw new HiveSQLException("Error verifying delegation token " + delegationToken,
nit: we can store the log entry in `msg` and use `msg.message` here.
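The idea behind this nit can be sketched with only the JDK (`MessageFormat` stands in for Spark's structured log entry, whose exact Java API is not shown in this hunk, and `delegationToken` is a placeholder value): build the message text once, then reuse it for both the log call and the thrown exception, so the two strings cannot drift apart.

```java
import java.text.MessageFormat;

public class ReuseMessageDemo {
    public static void main(String[] args) {
        String delegationToken = "token-123";
        // Format the message exactly once...
        String msg = MessageFormat.format("Error verifying delegation token {0}", delegationToken);
        // ...then reuse it in both places (Spark/Hive calls shown as comments):
        // LOG.error(msg, e, MDC.of(LogKeys.TOKEN$.MODULE$, delegationToken));
        // throw new HiveSQLException(msg, "08S01", e);
        System.out.println(msg);
    }
}
```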
-          " seconds has been exceeded. RUNNING background operations will be shut down", e);
+      LOG.warn("HIVE_SERVER2_ASYNC_EXEC_SHUTDOWN_TIMEOUT = {} ms has been exceeded. " +
+          "RUNNING background operations will be shut down", e,
+          MDC.of(LogKeys.TIMEOUT$.MODULE$, timeout * 1000));
nit: `1000L`
Done
This is not updated..
Oh, `timeout` is already of type `long`, so keeping the current code is also fine.
@@ -69,7 +72,8 @@ public ClassicTableTypeMapping() {
   public String[] mapToHiveType(String clientTypeName) {
     Collection<String> hiveTableType = clientToHiveMap.get(clientTypeName.toUpperCase());
     if (hiveTableType == null) {
-      LOG.warn("Not supported client table type " + clientTypeName);
+      LOG.warn("Not supported client table type {}",
+          MDC.of(LogKeys.CLIENT_TABLE_TYPE$.MODULE$, clientTypeName));
Nit: `CLIENT_TABLE_TYPE` is a bit confusing here. Is there a better name for this?
`CLIENT_TABLE_TYPE` -> `CLASSIS_TABLE_TYPE`
@@ -69,7 +72,8 @@ public ClassicTableTypeMapping() {
   public String[] mapToHiveType(String clientTypeName) {
     Collection<String> hiveTableType = clientToHiveMap.get(clientTypeName.toUpperCase());
     if (hiveTableType == null) {
-      LOG.warn("Not supported client table type " + clientTypeName);
+      LOG.warn("Not supported client table type {}",
+          MDC.of(LogKeys.CLASSIS_TABLE_TYPE$.MODULE$, clientTypeName));
Seems there is a typo: it should be `CLASSIC`. Let's simply use `TABLE_TYPE` for all the keys in this file.
Done
Thanks, merging to master
[SPARK-48182][SQL] SQL (java side): Migrate error/warn/info with variables to structured logging framework

### What changes were proposed in this pull request?
The pr aims to
1. migrate `error/warn/info` in module `SQL` with variables to `structured logging framework` for the java side.
2. convert all dependencies on `org.slf4j.Logger & org.slf4j.LoggerFactory` to `org.apache.spark.internal.Logger & org.apache.spark.internal.LoggerFactory`, in order to completely `prohibit` importing `org.slf4j.Logger & org.slf4j.LoggerFactory` in java code later.

### Why are the changes needed?
To enhance Apache Spark's logging system by implementing structured logging.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- Pass GA.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes apache#46450 from panbingkun/sql_java_sl.

Authored-by: panbingkun <panbingkun@baidu.com>
Signed-off-by: Gengliang Wang <gengliang@apache.org>