[CARBONDATA-1179] Improve the Size calculation of Objects being added and managed in LRU cache #1038

Merged — 1 commit merged into apache:master on Jun 29, 2017

Conversation

sraghunandan (Contributor)

No description provided.


asfgit commented Jun 15, 2017

Can one of the admins verify this patch?

@CarbonDataQA

Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2499/

@@ -92,13 +94,14 @@ protected void checkAndLoadTableBlocks(AbstractIndex tableBlock,
TableBlockInfo blockInfo = tableBlockUniqueIdentifier.getTableBlockInfo();
long requiredMetaSize = CarbonUtil.calculateMetaSize(blockInfo);
Contributor

Better to remove CarbonUtil.calculateMetaSize(blockInfo).

Contributor Author

Any particular reason? We use this value to decide whether the block needs to be loaded into memory.
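
For readers following the diff, a minimal self-contained sketch of the pattern being discussed, with hypothetical names (AdmissionSketch, loadIfItFits): the metadata size is estimated up front and used as the admission check against the LRU budget before any expensive loading happens. Only CarbonUtil.calculateMetaSize(blockInfo) in the hunk above is actual CarbonData code; everything below is illustrative.

// Illustrative sketch, not CarbonData code: estimate the required size first,
// and only perform the expensive load when it fits into the remaining budget.
final class AdmissionSketch {
  private final long capacityBytes;   // configured LRU budget
  private long usedBytes;             // bytes currently accounted in the cache

  AdmissionSketch(long capacityBytes) {
    this.capacityBytes = capacityBytes;
  }

  // Load only when the estimated size fits into the remaining budget.
  synchronized Object loadIfItFits(long requiredSize, java.util.function.Supplier<Object> loader) {
    if (usedBytes + requiredSize > capacityBytes) {
      return null;                    // caller must evict or skip caching
    }
    Object block = loader.get();      // expensive load happens only after the check
    usedBytes += requiredSize;
    return block;
  }
}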

@@ -233,8 +235,7 @@ private SegmentTaskIndexWrapper loadAndGetTaskIdToSegmentsMap(
taskIdToTableBlockInfoMap.entrySet().iterator();
long requiredSize =
calculateRequiredSize(taskIdToTableBlockInfoMap, absoluteTableIdentifier);
segmentTaskIndexWrapper
.setMemorySize(requiredSize + segmentTaskIndexWrapper.getMemorySize());

boolean isAddedToLruCache =
Contributor

Putting the segment cache object into the cache before loading the segment B-tree could allow a dirty read in case of concurrent queries.
Better to add the cache object only after the segment load has finished.
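
A hedged sketch of the ordering the reviewer is asking for, using hypothetical names: complete the load first, then publish the object through the shared cache, so a concurrent query can never observe a half-built segment index.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch, not CarbonData code: the index is fully built before it
// is published through the shared cache, avoiding the dirty read mentioned above.
final class PublishAfterLoadSketch {
  private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

  void loadAndCache(String segmentKey, List<String> blockPaths) {
    // 1. Finish building the in-memory structure (stands in for the B-tree load).
    List<String> index = new java.util.ArrayList<>(blockPaths);
    // 2. Only then make it visible to other queries through the cache.
    cache.put(segmentKey, index);
  }
}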

sraghunandan changed the title from "[WIP][CARBONDATA-1179] Improve the Size calculation of Objects being added and managed in LRU cache" to "[CARBONDATA-1179] Improve the Size calculation of Objects being added and managed in LRU cache" on Jun 26, 2017
@CarbonDataQA

Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/128/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2702/

sraghunandan force-pushed the lru_object_size_calculation branch 2 times, most recently from 28b0274 to 56e4d4b on June 27, 2017 12:06
@CarbonDataQA

Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/160/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2736/

@CarbonDataQA

Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/161/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2737/


asfgit commented Jun 27, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/666/


asfgit commented Jun 27, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/665/

@CarbonDataQA

Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/198/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2777/


asfgit commented Jun 28, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/707/

* @return
*/
public boolean tryPut(String columnIdentifier, long requiredSize) {
if (LOGGER.isDebugEnabled()) {
Contributor

It would be better to remove the entry so that the temporary block memory also stays within the LRU limit.

Contributor Author

handled
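
For context, a self-contained sketch of the reserve/release idea being discussed around tryPut; all names and the budget logic below are hypothetical, not the CarbonData implementation. Temporary working memory is reserved against the same byte budget and released again when the temporary object is discarded, so it also stays within the LRU limit.

// Illustrative sketch, not the actual CarbonData LRU cache.
final class LruBudgetSketch {
  private final long capacityBytes;
  private long usedBytes;

  LruBudgetSketch(long capacityBytes) {
    this.capacityBytes = capacityBytes;
  }

  // Reserve space for an object (possibly a temporary one); refuse if over budget.
  synchronized boolean tryPut(String columnIdentifier, long requiredSize) {
    if (usedBytes + requiredSize > capacityBytes) {
      return false;
    }
    usedBytes += requiredSize;
    return true;
  }

  // Release the reservation when the temporary object is dropped, so the
  // accounted size reflects only memory that is actually held.
  synchronized void remove(String columnIdentifier, long releasedSize) {
    usedBytes -= releasedSize;
  }
}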

@@ -45,6 +49,20 @@
private static final LogService LOGGER =
LogServiceFactory.getLogService(ReverseDictionaryCache.class.getName());

private static long sizeOfEmptyDictChunks =
Contributor

These can be static final.

Contributor Author

updated
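
A small sketch of why these overheads can be static final constants; the class name and the 48-byte figure are assumptions for illustration only. The size of an empty structure is fixed per class, so it can be computed once and reused when accounting each cache entry instead of being recomputed per instance.

import java.util.ArrayList;

// Illustrative sketch, not CarbonData code: fixed per-object overheads are
// computed once per class and shared by all instances.
final class SizeConstantsSketch {
  // Assumed rough shallow size of an empty ArrayList on a 64-bit JVM; a real
  // implementation would use an instrumentation- or object-layout-based estimate.
  private static final long SIZE_OF_EMPTY_LIST_BYTES = 48L;

  // Accounted size of an entry = constant container overhead + actual data bytes.
  static long accountedSize(long dataBytes) {
    return SIZE_OF_EMPTY_LIST_BYTES + dataBytes;
  }
}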

@CarbonDataQA

Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/205/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2784/

@CarbonDataQA

Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2786/

@CarbonDataQA

Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/207/


asfgit commented Jun 28, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/716/

gvramana (Contributor)

LGTM

asfgit merged commit 377dee9 into apache:master on Jun 29, 2017
asfgit pushed a commit that referenced this pull request Jun 29, 2017