[AMHACK] [Intermittent] Error thrown while shutting down apim analytics server #1891

Closed
jsonds opened this issue Nov 28, 2017 · 7 comments

jsonds commented Nov 28, 2017

Description:
Configured APIM with Analytics following the quick setup guide [1]. After some time, I shut down the APIM pack and then the Analytics pack.

The following exception was thrown from the APIM Analytics pack:

[2017-11-28 17:00:37,595] ERROR {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService} -  Error while executing query :         INSERT OVERWRITE table invalidLoginAttemptWeekly        SELECT            temp.agent AS agent,            SUM(temp.agentCount) AS totalAgentCount,            temp.byWeek AS week,            getWeekStartingTime(temp.byYear, temp.byMonth, temp.byWeek) AS _timestamp        FROM (            SELECT                tenantID AS agent,                getWeek(_timestamp) AS byWeek,                getYear(_timestamp) AS byYear,                getMonth(_timestamp) AS byMonth,                first(InvalidLoginCount) as agentCount            FROM invalidLoginAttemptDaily            GROUP BY tenantID, _timestamp            ORDER BY _timestamp        )temp        GROUP BY temp.agent, temp.byYear, temp.byMonth, temp.byWeek        ORDER BY temp.byYear, temp.byMonth, temp.byWeek
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query INSERT OVERWRITE table invalidLoginAttemptWeekly        SELECT            temp.agent AS agent,            SUM(temp.agentCount) AS totalAgentCount,            temp.byWeek AS week,            getWeekStartingTime(temp.byYear, temp.byMonth, temp.byWeek) AS _timestamp        FROM (            SELECT                tenantID AS agent,                getWeek(_timestamp) AS byWeek,                getYear(_timestamp) AS byYear,                getMonth(_timestamp) AS byMonth,                first(InvalidLoginCount) as agentCount            FROM invalidLoginAttemptDaily            GROUP BY tenantID, _timestamp            ORDER BY _timestamp        )temp        GROUP BY temp.agent, temp.byYear, temp.byMonth, temp.byWeek        ORDER BY temp.byYear, temp.byMonth, temp.byWeek
	at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:918)
	at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:875)
	at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
	at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
	at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
	at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
	at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange rangepartitioning(aggOrder#136781 ASC,aggOrder#136782 ASC,week#136758 ASC,200), None
+- ConvertToSafe
   +- TungstenAggregate(key=[agent#136775,byYear#136777,byMonth#136778,byWeek#136776], functions=[(sum(cast(agentCount#136779 as bigint)),mode=Final,isDistinct=false)], output=[agent#136756,totalAgentCount#136757L,week#136758,_timestamp#136759,aggOrder#136781,aggOrder#136782])
      +- TungstenExchange hashpartitioning(agent#136775,byYear#136777,byMonth#136778,byWeek#136776,200), None
         +- TungstenAggregate(key=[agent#136775,byYear#136777,byMonth#136778,byWeek#136776], functions=[(sum(cast(agentCount#136779 as bigint)),mode=Partial,isDistinct=false)], output=[agent#136775,byYear#136777,byMonth#136778,byWeek#136776,sum#136795L])
            +- Project [agent#136775,byWeek#136776,byYear#136777,byMonth#136778,agentCount#136779]
               +- Sort [aggOrder#136780L ASC], true, 0
                  +- ConvertToUnsafe
                     +- Exchange rangepartitioning(aggOrder#136780L ASC,200), None
                        +- ConvertToSafe
                           +- TungstenAggregate(key=[tenantID#136671,_timestamp#136673L], functions=[(first(InvalidLoginCount#136672)(),mode=Final,isDistinct=false)], output=[agent#136775,byWeek#136776,byYear#136777,byMonth#136778,agentCount#136779,aggOrder#136780L])
                              +- TungstenExchange hashpartitioning(tenantID#136671,_timestamp#136673L,200), None
                                 +- TungstenAggregate(key=[tenantID#136671,_timestamp#136673L], functions=[(first(InvalidLoginCount#136672)(),mode=Partial,isDistinct=false)], output=[tenantID#136671,_timestamp#136673L,first#136792,valueSet#136793])
                                    +- Scan org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation@1fb958b5[tenantID#136671,InvalidLoginCount#136672,_timestamp#136673L] 

	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
	at org.apache.spark.sql.execution.Exchange.doExecute(Exchange.scala:247)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.ConvertToUnsafe.doExecute(rowFormatConverters.scala:38)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.Sort.doExecute(Sort.scala:64)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.Project.doExecute(basicOperators.scala:46)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.datasources.InsertIntoDataSource.run(InsertIntoDataSource.scala:39)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
	at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
	at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
	at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:914)
	... 11 more
Caused by: org.apache.spark.SparkException: Job 168093 cancelled because SparkContext was shut down
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:806)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:804)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:804)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1658)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1581)
	at org.apache.spark.SparkContext$$anonfun$stop$9.apply$mcV$sp(SparkContext.scala:1740)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1219)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1739)
	at org.apache.spark.SparkContext$$anonfun$3.apply$mcV$sp(SparkContext.scala:596)
	at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1801)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
	at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
	at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:264)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:126)
	at org.apache.spark.sql.execution.Exchange.prepareShuffleDependency(Exchange.scala:179)
	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:254)
	at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:248)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
	... 48 more
[2017-11-28 17:00:37,608] ERROR {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while executing the scheduled task for the script: APIM_LOGANALYZER_SCRIPT
(The same AnalyticsExecutionException and stack trace as above follow, again caused by: org.apache.spark.SparkException: Job 168093 cancelled because SparkContext was shut down)


[1] https://docs.wso2.com/display/AM210/Configuring+APIM+Analytics#309a01728f0a42d69613b37e4f5225c3
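
For readability, here is the failing Spark SQL query from the log above, reformatted (whitespace only; content unchanged):

    INSERT OVERWRITE TABLE invalidLoginAttemptWeekly
    SELECT
        temp.agent AS agent,
        SUM(temp.agentCount) AS totalAgentCount,
        temp.byWeek AS week,
        getWeekStartingTime(temp.byYear, temp.byMonth, temp.byWeek) AS _timestamp
    FROM (
        SELECT
            tenantID AS agent,
            getWeek(_timestamp) AS byWeek,
            getYear(_timestamp) AS byYear,
            getMonth(_timestamp) AS byMonth,
            first(InvalidLoginCount) AS agentCount
        FROM invalidLoginAttemptDaily
        GROUP BY tenantID, _timestamp
        ORDER BY _timestamp
    ) temp
    GROUP BY temp.agent, temp.byYear, temp.byMonth, temp.byWeek
    ORDER BY temp.byYear, temp.byMonth, temp.byWeek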

Suggested Labels:
bug

Affected Product Version:
APIM wum updated - wso2am-2.1.0.1510928473100.zip
APIM analytics pack wum updated - wso2am-analytics-2.1.0.1511528208814.zip

OS, DB, other environment details and versions:
APIM and analytics quick setup
Ubuntu 14
JDK - jdk1.8.0
DB - h2
Browser - Chrome Version 62.0.3202.62

@bhathiya
Contributor

Was this a graceful shutdown?

@jsonds
Author

jsonds commented Dec 21, 2017

I shut down the server using Ctrl+C in the terminal.

@bhathiya
Contributor

That's a forceful shutdown and it can cause such issues.

bhathiya added this to the 2.1.0-update5 milestone Dec 21, 2017
@bhathiya
Contributor

I'm closing the ticket, as the shutdown approach used here is invalid.

@jsonds
Author

jsonds commented Dec 21, 2017

@bhathiya Then we might need to mention in the docs that Ctrl+C is not recommended (or is not the best approach), since it is currently described under "Stopping the server" in [1].
I will raise a doc JIRA pointing this out.

[1] https://docs.wso2.com/display/AM210/Running+the+Product

@bhathiya
Contributor

In a production environment, the servers are not run in the foreground, so this method is not relevant in that case. If we need to mention this in the docs, we can recommend running the servers as a service or in the background.

@jsonds
Author

jsonds commented Dec 21, 2017

Agreed, in a production environment this will not be the case. However, a first-time user may encounter this, so let's document running the servers as a service, or at least mention this behavior in the docs (in a suitable manner) when APIM is configured with Analytics.
