
Can't get cube segment size #50

Closed
xingfengshen opened this issue Nov 11, 2014 · 4 comments

@xingfengshen
My build of test_kylin_cube_with_slr_empty failed; it reports that it can't get the cube segment size. The error log is as follows:
#13 Step Name: Load HFile to HBase Table

Start to execute command:
 -input /tmp/kylin-e3f73338-edb8-4a1c-9013-e14d61f58f0c/test_kylin_cube_with_slr_empty/hfile/ -htablename KYLIN_QA_CUBE_HYIZYTA36N -cubename test_kylin_cube_with_slr_empty
Command execute return code 0

Failed with Exception:java.lang.RuntimeException: Can't get cube segment size.
at com.kylinolap.job.flow.JobFlowListener.updateCubeSegmentInfoOnSucceed(JobFlowListener.java:245)
at com.kylinolap.job.flow.JobFlowListener.jobWasExecuted(JobFlowListener.java:99)
at org.quartz.core.QuartzScheduler.notifyJobListenersWasExecuted(QuartzScheduler.java:1985)
at org.quartz.core.JobRunShell.notifyJobListenersComplete(JobRunShell.java:340)
at org.quartz.core.JobRunShell.run(JobRunShell.java:224)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

All the other steps are successful. Which module is supposed to set the segment size?

@binmahone

The job engine retrieves the stats from the Hadoop job log after the cube build is done. It seems the job engine failed to find the log it needed. Please check the log output of the job step called "Convert Cuboid Data to HFile" (see attached image) to see if there's a line like

HDFS: Number of bytes read=3851451742573

[screenshots: screen shot 2014-11-11 at 1 05 43 pm; screen shot 2014-11-11 at 1 04 46 pm]
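For reference, the check described above boils down to scanning the MapReduce job log for that counter line. Below is a minimal sketch of that kind of extraction; the class and method names are hypothetical illustrations, not Kylin's actual code (per the stack trace, the real logic lives in `JobFlowListener.updateCubeSegmentInfoOnSucceed`):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CounterParser {
    // Matches job-counter lines such as "HDFS: Number of bytes read=3851451742573"
    private static final Pattern HDFS_BYTES_READ =
            Pattern.compile("HDFS: Number of bytes read=(\\d+)");

    /** Returns the byte count from the job log, or -1 if the counter line is absent. */
    static long parseHdfsBytesRead(String jobLog) {
        Matcher m = HDFS_BYTES_READ.matcher(jobLog);
        return m.find() ? Long.parseLong(m.group(1)) : -1L;
    }

    public static void main(String[] args) {
        String log = "File System Counters\n"
                   + "HDFS: Number of bytes read=3851451742573\n"
                   + "HDFS: Number of bytes written=1024\n";
        System.out.println(parseHdfsBytesRead(log));          // prints 3851451742573
        System.out.println(parseHdfsBytesRead("no counters")); // prints -1
    }
}
```

If the "Convert Cuboid Data to HFile" step's log lacks that line entirely (for example because the JobHistoryServer is unreachable), a lookup of this kind finds nothing, which matches the "Can't get cube segment size" failure.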

@binmahone

BTW, if you could leave us your contact info, we might be able to communicate with you directly, so that we can better understand your requirements.

@lukehan lukehan added the bug label Nov 11, 2014
@lukehan lukehan added this to the v0.7 Release milestone Nov 11, 2014
@xingfengshen
Author

Hi, we have fixed this issue.

1st: we started the JobHistoryServer. The build had been failing with:
java.net.ConnectException: to 0.0.0.0:10020 failed on connection exception
Configure Hadoop's /etc/hadoop/mapred-site.xml file:

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>host:10020</value>
  </property>

then execute this command:
mr-jobhistory-daemon.sh start historyserver

2nd: HBase-0.98.0-hadoop2 is built against Hadoop 2.2.0, but the Hadoop version in our cluster is 2.4.1, which causes an exception (see https://issues.apache.org/jira/browse/MAPREDUCE-5831 for details), so we replaced the bundled Hadoop jars with the 2.4.1 versions.

Now the cube builds successfully.

Thank you very much!

@lukehan
Contributor

lukehan commented Nov 12, 2014

Hi @xfhap, this is a really great reference for others setting up their clusters.

Thank you very much.
Luke
