
Optimizer is now aware of cached subresults #171

Merged
merged 1 commit into ad-freiburg:master on Jan 21, 2019

Conversation

@joka921 (Member) commented Dec 30, 2018

  • QueryExecutionTrees try to find their results in the LRU cache after they are initialized.

  • If the result is found, it is pinned via a shared pointer.

  • In this case the size estimate is exact and the cost estimate becomes 0.

  • We still have to verify that this indeed works, e.g. by using Niklas' autocompletion script with similar queries.
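The lookup-and-pin behavior described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: `ResultTable`, `ResultCache`, and the estimate values are stand-ins, not QLever's actual classes or API.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for QLever's ResultTable.
struct ResultTable {
  std::vector<int> rows;
  size_t size() const { return rows.size(); }
};

// Hypothetical stand-in for the LRU cache: maps a tree's cache key
// (its string representation) to an already computed result.
using ResultCache =
    std::unordered_map<std::string, std::shared_ptr<const ResultTable>>;

class QueryExecutionTree {
 public:
  QueryExecutionTree(std::string cacheKey, const ResultCache* cache)
      : _cacheKey(std::move(cacheKey)) {
    // Look up the (sub)result right after initialization. On a hit, pin it
    // via a shared_ptr so it cannot be evicted while we are still planning.
    if (cache) {
      auto it = cache->find(_cacheKey);
      if (it != cache->end()) _cachedResult = it->second;
    }
  }

  // With a pinned result, the size estimate is exact and the cost is 0.
  size_t getSizeEstimate() const {
    return _cachedResult ? _cachedResult->size() : _fallbackSizeEstimate;
  }
  size_t getCostEstimate() const {
    return _cachedResult ? 0 : _fallbackCostEstimate;
  }

 private:
  std::string _cacheKey;
  std::shared_ptr<const ResultTable> _cachedResult;
  size_t _fallbackSizeEstimate = 1000;  // pretend statistics-based estimate
  size_t _fallbackCostEstimate = 5000;  // pretend cost from the planner
};
```

The key design point is that the shared pointer both answers the lookup and keeps the entry alive, so the "exact" size estimate cannot be invalidated by eviction mid-planning.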

@niklas88 (Member) left a comment

Looks good in general, but I'm still a bit concerned about the possible deadlock we might have seen. Looking at the code, I think that if we look up the query execution tree for a tree we are currently building, we will find it in the cache with a ResultTable that is still being computed; trying to wait on it via awaitFinished() would then deadlock, since we would be waiting on our own thread. However, I'd expect this to happen every time, so I'm a little confused. One way to mitigate this would be to only cache results which are fully computed. While this is clearly a race condition, it would only result in a potential optimization loss.
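The proposed mitigation (treat in-progress entries as cache misses) can be sketched like this. All names here are hypothetical stand-ins for the real QLever types; the point is only the guard on the status flag:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for a ResultTable with a completion status.
struct ResultTable {
  enum Status { IN_PROGRESS, FINISHED };
  Status _status = IN_PROGRESS;
  size_t _size = 0;
};

using ResultCache =
    std::unordered_map<std::string, std::shared_ptr<ResultTable>>;

// Return a cached result only if it is fully computed; otherwise behave
// like a cache miss. Skipping an in-progress entry merely loses an
// optimization opportunity -- it can never make the optimizer call
// awaitFinished() on a result its own thread is still producing,
// so the self-deadlock described above cannot occur.
std::shared_ptr<ResultTable> lookupFinished(const ResultCache& cache,
                                            const std::string& key) {
  auto it = cache.find(key);
  if (it == cache.end() || it->second->_status != ResultTable::FINISHED) {
    return nullptr;
  }
  return it->second;
}
```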

@@ -29,6 +29,6 @@ add_library(engine
HasPredicateScan.cpp HasPredicateScan.h
Union.cpp Union.h
MultiColumnJoin.cpp MultiColumnJoin.h
)
../util/vector2d.h ../util/ApplyVarsizeOperation.h)
I think this is an unrelated change for the planned ResultTable redesign?

@@ -35,25 +35,23 @@ string QueryExecutionTree::asString(size_t indent) {
for (size_t i = 0; i < indent; ++i) {
indentStr += " ";
}
if (_asString.size() == 0) {
If we want to keep this caching we could also remember the indent that was used, right?

@@ -0,0 +1,82 @@
//
//
This whole file seems to be an unrelated change

@@ -0,0 +1,28 @@
//
//
as does this

@@ -904,6 +904,7 @@ TEST(QueryExecutionTreeTest, testPoliticiansFriendWithScieManHatProj) {
// "| width: 3} [0] with textLimit = 1 | width: 6} [0]\n) "
// "| width: 6}",
// qet.asString());
/*
Remove the commented-out part; git will remember the old version if we ever need it again.

<< _rootOperation->asString(indent + 2) << "\n"
<< indentStr << " qet-width: " << getResultWidth() << " ";
if (LOGLEVEL >= TRACE && _qec) {
os << " [estimated size: " << getSizeEstimate() << "]";
@niklas88 (Member) commented Jan 7, 2019
After a tree was cached, a new call would give new (exact) size estimates, so this part would change. So I think with a higher LOGLEVEL the optimization doesn't really work? Maybe these estimates really shouldn't be in the cache string. So maybe we should put this part behind a flag parameter that is only used when printing to the console?

@joka921 (Member, Author) replied:
You are right. For now I left it as-is, since I would suggest putting the whole cache-key vs. nice-formatting business in a different PR; I want to do this in a clean way.

@niklas88 (Member) commented Jan 13, 2019

Ok, then let's just completely remove this if block here so it isn't subtly broken just by changing the log level. Instead, let's add a feature-request issue for splitting this up, and in it mention that we want to be able to see size estimates.

@@ -203,7 +208,9 @@ size_t QueryExecutionTree::getCostEstimate() {
// _____________________________________________________________________________
size_t QueryExecutionTree::getSizeEstimate() {
if (_sizeEstimate == std::numeric_limits<size_t>::max()) {
if (_qec) {
if (_cachedResult) {
_sizeEstimate = _cachedResult->size();
I think the size() might be bogus if ResultTable._status != ResultTable::FINISHED
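The fix this comment asks for amounts to trusting `size()` only once the result is finished, and otherwise falling back to the ordinary estimate. A minimal sketch, with hypothetical stand-in types (the real `ResultTable` and the heuristic estimate live elsewhere in QLever):

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <memory>

// Hypothetical stand-in for a ResultTable with a completion status.
struct ResultTable {
  enum Status { IN_PROGRESS, FINISHED };
  Status _status = IN_PROGRESS;
  size_t _size = 0;
  size_t size() const { return _size; }
};

class QueryExecutionTree {
 public:
  explicit QueryExecutionTree(std::shared_ptr<ResultTable> cached)
      : _cachedResult(std::move(cached)) {}

  size_t getSizeEstimate() {
    // max() marks "not computed yet"; the estimate is memoized.
    if (_sizeEstimate == std::numeric_limits<size_t>::max()) {
      if (_cachedResult && _cachedResult->_status == ResultTable::FINISHED) {
        _sizeEstimate = _cachedResult->size();  // exact: fully computed
      } else {
        // size() on an unfinished result would be bogus, so fall back.
        _sizeEstimate = 1000;  // stand-in for the heuristic estimate
      }
    }
    return _sizeEstimate;
  }

 private:
  std::shared_ptr<ResultTable> _cachedResult;
  size_t _sizeEstimate = std::numeric_limits<size_t>::max();
};
```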

@joka921 (Member, Author) commented Jan 13, 2019

I fixed everything except for the one comment about the LOG(TRACE) business; see there.

- QueryExecutionTrees try to find their results in the LRU cache after they are initialized.

- If the result is found and already finished, it is pinned via a shared pointer.
- In this case the size estimate is exact and the cost estimate becomes 0.
- QueryExecutionTrees use their cached string representation only when the indent matches. This enforces a deterministic cache key.
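The indent-matching rule from the last point can be sketched as follows. The member names and the "OPERATION" payload are hypothetical placeholders; the real method serializes the whole operation tree:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical sketch: cache the string representation together with the
// indent it was built for, and reuse it only when the indent matches, so
// that the string used as a cache key stays deterministic.
class QueryExecutionTree {
 public:
  std::string asString(size_t indent = 0) {
    if (!_asString.empty() && indent == _indentOfCachedString) {
      return _asString;  // cache hit: same indent, identical string
    }
    std::string indentStr(indent, ' ');
    _asString = indentStr + "OPERATION";  // stand-in for the real tree dump
    _indentOfCachedString = indent;
    return _asString;
  }

 private:
  std::string _asString;
  size_t _indentOfCachedString = 0;
};
```

Without the indent check, a tree first printed at indent 2 would hand back the indented string even when asked for indent 0, producing different cache keys for identical subtrees.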
@niklas88 (Member) left a comment
LGTM

@niklas88 niklas88 merged commit 7a7ac7f into ad-freiburg:master Jan 21, 2019
@joka921 joka921 deleted the f.cachedOptimizer branch May 8, 2021 09:15