Introduce SystemSchema tables (#5989) #6094

Merged: 61 commits, merged on Oct 11, 2018
Changes from 5 commits

Commits (61)
44c6337
Added SystemSchema with following tables (#5989)
Jul 31, 2018
52b6115
Add documentation for system schema
Aug 1, 2018
335fc2b
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 1, 2018
4f34202
Fix static-analysis warnings
Aug 1, 2018
59e996b
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 2, 2018
7991720
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 13, 2018
3fd41de
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 19, 2018
816552e
Address PR comments
Aug 22, 2018
d74040c
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 22, 2018
a8038ee
Fix a test
Aug 22, 2018
456a0ad
Try to fix a test
Aug 22, 2018
5728364
Fix a bug around replica count
Aug 22, 2018
05fd4ce
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 27, 2018
cec8737
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 30, 2018
54dd64c
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 30, 2018
c99f027
Merge branch 'master' of github.com:druid-io/druid into system-table
Aug 31, 2018
7a57b1e
rename io.druid to org.apache.druid
Aug 31, 2018
3cb0f52
Merge branch 'master' of github.com:druid-io/druid into system-table
Sep 7, 2018
cf18959
Major change is to make tasks and segment queries streaming
Sep 7, 2018
68d45a0
Fix docs, make num_rows column nullable, some unit test changes
Sep 11, 2018
0239f94
Merge branch 'master' of github.com:druid-io/druid into system-table
Sep 11, 2018
4e3b013
make num_rows column type long, allow it to be null
Sep 11, 2018
495883a
Filter null rows for segments table from Linq4j enumerable
Sep 11, 2018
b6fe553
change num_replicas datatype to long in segments table
Sep 11, 2018
bab61c6
Fix some tests and address comments
Sep 22, 2018
14064c5
Merge branch 'master' of github.com:druid-io/druid into system-table
Sep 22, 2018
1f44382
Doc updates, other PR comments
Sep 24, 2018
8f7b0b6
Merge branch 'master' of github.com:druid-io/druid into system-table
Sep 24, 2018
b66a81b
Update tests
Sep 25, 2018
e92237f
Merge branch 'master' of github.com:druid-io/druid into system-table
Sep 25, 2018
9efbe96
Merge branch 'master' of github.com:druid-io/druid into system-table
Sep 27, 2018
95b5bc8
Address comments
Oct 1, 2018
b605ab9
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 1, 2018
1569aa5
Fix teamcity warning, change the getQueryableServer in TimelineServer…
Oct 1, 2018
b1a219a
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 1, 2018
ba7afe9
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 2, 2018
44d7285
Fix compilation after rebase
Oct 2, 2018
be5e9d7
Use the stream API from AuthorizationUtils
Oct 2, 2018
f53600f
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 2, 2018
100fa46
Added LeaderClient interface and NoopDruidLeaderClient class
Oct 2, 2018
a0dc468
Revert "Added LeaderClient interface and NoopDruidLeaderClient class"
Oct 3, 2018
0f96043
Make the naming consistent to server_segments for the join table
Oct 3, 2018
689f655
Try to fix a test in CalciteQueryTest due to rename of server_segments
Oct 3, 2018
3806a9c
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 3, 2018
dc9fa4c
Fix the json output format in the coordinator API
Oct 4, 2018
132404d
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 4, 2018
7ffc2b4
Use annonymous class object instead of mock for DruidLeaderClient in …
Oct 4, 2018
1bdff58
Fix test failures, type long/BIGINT can be nullable
Oct 4, 2018
26acfe8
Revert long nullability to fix tests
Oct 5, 2018
3fbbdc6
Fix style for tests
Oct 5, 2018
ccc7f18
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 5, 2018
23112a5
PR comments
Oct 8, 2018
b84d728
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 8, 2018
3cd1722
Address PR comments
Oct 8, 2018
1022693
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 8, 2018
d63469d
Add the missing BytesAccumulatingResponseHandler class
Oct 8, 2018
9f396aa
Use Sequences.withBaggage in DruidPlanner
Oct 8, 2018
e0657e5
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 9, 2018
83c74fe
Fix docs, add comments
Oct 9, 2018
1873c92
Merge branch 'master' of github.com:druid-io/druid into system-table
Oct 9, 2018
892ee80
Close the iterator if hasNext returns false
Oct 10, 2018
@@ -25,6 +25,7 @@
import io.druid.benchmark.datagen.BenchmarkSchemas;
import io.druid.benchmark.datagen.SegmentGenerator;
import io.druid.data.input.Row;
import io.druid.discovery.DruidLeaderClient;
import io.druid.java.util.common.Intervals;
import io.druid.java.util.common.granularity.Granularities;
import io.druid.java.util.common.guava.Sequence;
@@ -43,9 +44,11 @@
import io.druid.sql.calcite.planner.PlannerResult;
import io.druid.sql.calcite.util.CalciteTests;
import io.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import io.druid.sql.calcite.util.TestServerInventoryView;
import io.druid.timeline.DataSegment;
import io.druid.timeline.partition.LinearShardSpec;
import org.apache.commons.io.FileUtils;
import org.easymock.EasyMock;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
@@ -106,16 +109,20 @@ public void setup()
final QueryableIndex index = segmentGenerator.generate(dataSegment, schemaInfo, Granularities.NONE, rowsPerSegment);
final QueryRunnerFactoryConglomerate conglomerate = CalciteTests.queryRunnerFactoryConglomerate();
final PlannerConfig plannerConfig = new PlannerConfig();
final DruidLeaderClient druidLeaderClient = EasyMock.createMock(DruidLeaderClient.class);

this.walker = new SpecificSegmentsQuerySegmentWalker(conglomerate).add(dataSegment, index);
plannerFactory = new PlannerFactory(
CalciteTests.createMockSchema(walker, plannerConfig),
new TestServerInventoryView(walker.getSegments()),
CalciteTests.createMockQueryLifecycleFactory(walker),
CalciteTests.createOperatorTable(),
CalciteTests.createExprMacroTable(),
plannerConfig,
AuthTestUtils.TEST_AUTHORIZER_MAPPER,
CalciteTests.getJsonMapper()
CalciteTests.getJsonMapper(),
druidLeaderClient,
druidLeaderClient
);
groupByQuery = GroupByQuery
.builder()
75 changes: 75 additions & 0 deletions docs/content/querying/sql.md
@@ -430,6 +430,10 @@ plan SQL queries. This metadata is cached on broker startup and also updated per
[SegmentMetadata queries](segmentmetadataquery.html). Background metadata refreshing is triggered by
segments entering and exiting the cluster, and can also be throttled through configuration.

Druid exposes system information through special system tables. There are two such schemas available : Information Schema and System Schema
Contributor: Spacing around the colon is weird: it should have a space after, but not before. Please also add some information about what each table is useful for (INFORMATION_SCHEMA provides details about tables/column types, and SYS provides information about Druid internals like segments/tasks/servers).

Author: fixed
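
To make the schema distinction concrete, a minimal sketch of the two kinds of queries the reviewer describes (the sys table name assumes what this PR introduces):

```sql
-- INFORMATION_SCHEMA: metadata about tables and column types
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'druid';
-- sys: Druid internals such as segments, servers and tasks
SELECT COUNT(*) FROM sys.segments;
```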


## INFORMATION SCHEMA

You can access table and column metadata through JDBC using `connection.getMetaData()`, or through the
INFORMATION_SCHEMA tables described below. For example, to retrieve metadata for the Druid
datasource "foo", use the query:
@@ -481,6 +485,77 @@ SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = 'druid' AND TABLE_
|COLLATION_NAME||
|JDBC_TYPE|Type code from java.sql.Types (Druid extension)|

## SYSTEM SCHEMA

SYSTEM_TABLES provide visibility into the druid segments, servers and tasks.
Contributor: Please correct capitalization and naming: "The SYS schema provides visibility into Druid segments, servers and tasks."

Author: done

For example to retrieve all segments for datasource "wikipedia", use the query:
```sql
select * from SYS.SEGMENTS where DATASOURCE='wikipedia';
```

Contributor: Lowercase seems more Druid-y, so I think I'd prefer SELECT * FROM sys.segments WHERE dataSource = 'wikipedia'. The only reason INFORMATION_SCHEMA isn't like this is because it's a standard thing and uppercase seems more normal for it from looking at other databases.

Author: ok, will make everything lowercase in docs and code. For the column names, do they need to be camelCase like dataSource, isPublished etc., or datasource, is_published, or is keeping them uppercase fine?

Contributor: Hmm, good question. I think in SQL underscores are more normal, although data_source is very weird so let's not do that. Probably datasource is ok. If anyone else has an opinion please go for it.
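
For reference, the lowercase form settled on above would presumably end up as:

```sql
SELECT * FROM sys.segments WHERE datasource = 'wikipedia';
```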

### SEGMENTS table
Segments tables provides details on all the segments, both published and served(but not published).
Contributor: To me this reads a bit unclear, I'd suggest trying something like: "Segments tables provides details on all Druid segments, whether they are published yet or not."

Author: changed



|Column|Notes|
|------|-----|
|SEGMENT_ID||
Contributor: Please include a description for all of these columns, and capitalize the first letter of each description.

Contributor: Please include "size", "version", and "partition_num" too -- they are all useful. I'd also include "replicas" which should be the number of replicas currently being served.

Author: added

|DATASOURCE||
|START||
|END||
|IS_PUBLISHED|segment in metadata store|
Contributor: It'd be clearer to expand this a bit: "True if this segment has been published to the metadata store." Similar comment for the other ones.

Author: added more description

|IS_AVAILABLE|segment is being served|
|IS_REALTIME|segment served on a realtime server|
|PAYLOAD|jsonified datasegment payload|
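
As a usage sketch, these flags can be combined; this assumes the lowercase column naming settled on earlier in this review and that the boolean columns are exposed as 1/0:

```sql
-- Hypothetical: published segments that are not currently being served
SELECT datasource, COUNT(*) AS unavailable_segments
FROM sys.segments
WHERE is_published = 1 AND is_available = 0
GROUP BY datasource;
```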

### SERVERS table
Contributor: There should be a blurb here explaining what this table is all about. Currently, it's listing all data servers (anything that might host a segment) and that includes both historicals and ingestion tasks.

Author: added blurb



|Column|Notes|
|------|-----|
|SERVER||
Contributor: Please include a description for all of these columns, including:

  • Server should detail the expected format (host:port? does it include scheme?)
  • Scheme should be somewhere in here. Possibly a separate field "scheme".
  • Server type should list the possible server types.
  • Max size should reference the historical docs and call out that it's referring to the druid.server.maxSize property.
  • Anything else that seems useful!

Author: added description

|SERVER_TYPE||
|TIER||
|CURRENT_SIZE||
|MAX_SIZE||

To retrieve all servers information, use the query
Contributor: Better grammar: "To retrieve information about all servers, use the query:"

Author: changed

```sql
select * from SYS.SERVERS;
```
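
Building on these columns, a hedged sketch (lowercase names per this review; treating the sizes as byte counts and skipping servers that report no capacity are assumptions):

```sql
-- Hypothetical: approximate storage utilization per tier
SELECT tier,
       SUM(current_size) * 100.0 / SUM(max_size) AS pct_used
FROM sys.servers
WHERE max_size > 0
GROUP BY tier;
```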

### SEGMENTSERVERS table

SEGMENTSERVERS is used to join SEGMENTS with SERVERS table
Contributor: SEGMENT_SERVERS would be a nicer name, I think. I think we should provide an example too.

Author: changed to segment_servers lowercase


|Column|Notes|
|------|-----|
|SERVER||
Contributor: Please include in the notes which column these correspond to in the other tables.

Author: done

|SEGMENT_ID||
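
A sketch of the example the reviewer asks for, assuming the renamed sys.segment_servers table and that its server and segment_id columns line up with sys.servers and sys.segments:

```sql
-- Hypothetical: number of segments hosted by each server
SELECT sv.server, COUNT(ss.segment_id) AS num_segments
FROM sys.segment_servers ss
JOIN sys.servers sv ON ss.server = sv.server
GROUP BY sv.server;
```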

### TASKS table

TASKS table provides tasks info from overlord.
Contributor: How about: "The TASKS table provides information about active and recently-completed indexing tasks." And link "indexing tasks" to a useful page about that.

Author: done


|Column|Notes|
|------|-----|
|TASK_ID||
Contributor: These should all have comments too.

Author: yes added

|TYPE||
|DATASOURCE||
|CREATED_TIME||
|QUEUE_INSERTION_TIME||
|STATUS||
|RUNNER_STATUS||
|DURATION||
|LOCATION||
|ERROR_MSG||

For example, to retrieve tasks information filtered by status, use the query
```sql
select * from SYS.TASKS where STATUS='FAILED';
```
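
One more hedged sketch, assuming the lowercase sys.tasks naming: grouping rather than filtering by status:

```sql
SELECT status, COUNT(*) AS num_tasks
FROM sys.tasks
GROUP BY status;
```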


## Server configuration

The Druid SQL server is configured through the following properties on the broker.
@@ -23,7 +23,9 @@
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Iterables;
import io.druid.client.TimelineServerView;
import io.druid.common.config.NullHandling;
import io.druid.discovery.DruidLeaderClient;
import io.druid.java.util.common.granularity.Granularities;
import io.druid.query.Druids;
import io.druid.query.QueryDataSource;
@@ -61,8 +63,10 @@
import io.druid.sql.calcite.util.CalciteTests;
import io.druid.sql.calcite.util.QueryLogHook;
import io.druid.sql.calcite.util.SpecificSegmentsQuerySegmentWalker;
import io.druid.sql.calcite.util.TestServerInventoryView;
import io.druid.timeline.DataSegment;
import io.druid.timeline.partition.LinearShardSpec;
import org.easymock.EasyMock;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@@ -126,19 +130,24 @@ public void setUp() throws Exception

final PlannerConfig plannerConfig = new PlannerConfig();
final DruidSchema druidSchema = CalciteTests.createMockSchema(walker, plannerConfig);
final TimelineServerView serverView = new TestServerInventoryView(walker.getSegments());
final DruidLeaderClient druidLeaderClient = EasyMock.createMock(DruidLeaderClient.class);
final DruidOperatorTable operatorTable = new DruidOperatorTable(
ImmutableSet.of(new QuantileSqlAggregator()),
ImmutableSet.of()
);

plannerFactory = new PlannerFactory(
druidSchema,
serverView,
CalciteTests.createMockQueryLifecycleFactory(walker),
operatorTable,
CalciteTests.createExprMacroTable(),
plannerConfig,
AuthTestUtils.TEST_AUTHORIZER_MAPPER,
CalciteTests.getJsonMapper()
CalciteTests.getJsonMapper(),
druidLeaderClient,
druidLeaderClient
);
}

6 changes: 6 additions & 0 deletions server/src/main/java/io/druid/client/BrokerServerView.java
@@ -322,4 +322,10 @@ private void runTimelineCallbacks(final Function<TimelineCallback, CallbackActio
);
}
}

@Override
public Map<String, QueryableDruidServer> getClients()
{
return clients;
}
}
5 changes: 5 additions & 0 deletions server/src/main/java/io/druid/client/TimelineServerView.java
@@ -19,6 +19,7 @@

package io.druid.client;

import io.druid.client.selector.QueryableDruidServer;
import io.druid.client.selector.ServerSelector;
import io.druid.query.DataSource;
import io.druid.query.QueryRunner;
@@ -27,6 +28,7 @@
import io.druid.timeline.TimelineLookup;

import javax.annotation.Nullable;
import java.util.Map;
import java.util.concurrent.Executor;

/**
@@ -36,6 +38,9 @@ public interface TimelineServerView extends ServerView
@Nullable
TimelineLookup<String, ServerSelector> getTimeline(DataSource dataSource);

@Nullable
Map<String, QueryableDruidServer> getClients();

<T> QueryRunner<T> getQueryRunner(DruidServer server);

/**
@@ -49,6 +49,7 @@
import org.junit.Before;
import org.junit.Test;

import javax.annotation.Nullable;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
@@ -225,6 +226,13 @@ public VersionedIntervalTimeline<String, ServerSelector> getTimeline(DataSource
return timeline;
}

@Nullable
@Override
public Map<String, QueryableDruidServer> getClients()
{
throw new UnsupportedOperationException();
}

@Override
public void registerTimelineCallback(final Executor exec, final TimelineCallback callback)
{
@@ -2656,6 +2656,13 @@ public VersionedIntervalTimeline<String, ServerSelector> getTimeline(DataSource
return timeline;
}

@Nullable
@Override
public Map<String, QueryableDruidServer> getClients()
{
throw new UnsupportedOperationException();
}

@Override
public <T> QueryRunner<T> getQueryRunner(DruidServer server)
{
26 changes: 25 additions & 1 deletion sql/src/main/java/io/druid/sql/calcite/planner/Calcites.java
@@ -19,9 +19,13 @@

package io.druid.sql.calcite.planner;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Preconditions;
import com.google.common.io.BaseEncoding;
import com.google.common.primitives.Chars;
import io.druid.client.BrokerServerView;
import io.druid.client.TimelineServerView;
import io.druid.discovery.DruidLeaderClient;
import io.druid.java.util.common.DateTimes;
import io.druid.java.util.common.IAE;
import io.druid.java.util.common.ISE;
@@ -32,6 +36,7 @@
import io.druid.server.security.AuthorizerMapper;
import io.druid.sql.calcite.schema.DruidSchema;
import io.druid.sql.calcite.schema.InformationSchema;
import io.druid.sql.calcite.schema.SystemSchema;
import org.apache.calcite.jdbc.CalciteSchema;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
@@ -98,11 +103,30 @@ public static Charset defaultCharset()
return DEFAULT_CHARSET;
}

public static SchemaPlus createRootSchema(final Schema druidSchema, final AuthorizerMapper authorizerMapper)
public static SchemaPlus createRootSchema(
final TimelineServerView serverView,
final Schema druidSchema,
final AuthorizerMapper authorizerMapper,
final DruidLeaderClient coordinatorDruidLeaderClient,
final DruidLeaderClient overlordDruidLeaderClient,
final ObjectMapper jsonMapper
)
{
final SchemaPlus rootSchema = CalciteSchema.createRootSchema(false, false).plus();
rootSchema.add(DruidSchema.NAME, druidSchema);
rootSchema.add(InformationSchema.NAME, new InformationSchema(rootSchema, authorizerMapper));
if (serverView instanceof BrokerServerView) {
Contributor: Is this really necessary? It looks like you added getClients to TimelineServerView, so we shouldn't need to cast it to a BrokerServerView.

Author: it doesn't seem necessary, removed the cast.

rootSchema.add(
SystemSchema.NAME,
new SystemSchema(
(BrokerServerView) serverView,
authorizerMapper,
coordinatorDruidLeaderClient,
overlordDruidLeaderClient,
jsonMapper
)
);
}
return rootSchema;
}

24 changes: 22 additions & 2 deletions sql/src/main/java/io/druid/sql/calcite/planner/PlannerFactory.java
@@ -21,6 +21,10 @@

import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.inject.Inject;
import io.druid.client.TimelineServerView;
import io.druid.client.coordinator.Coordinator;
import io.druid.client.indexing.IndexingService;
import io.druid.discovery.DruidLeaderClient;
import io.druid.guice.annotations.Json;
import io.druid.math.expr.ExprMacroTable;
import io.druid.server.QueryLifecycleFactory;
@@ -57,36 +61,52 @@ public class PlannerFactory
.build();

private final DruidSchema druidSchema;
private final TimelineServerView serverView;
private final QueryLifecycleFactory queryLifecycleFactory;
private final DruidOperatorTable operatorTable;
private final ExprMacroTable macroTable;
private final PlannerConfig plannerConfig;
private final ObjectMapper jsonMapper;
private final AuthorizerMapper authorizerMapper;
private final DruidLeaderClient coordinatorDruidLeaderClient;
private final DruidLeaderClient overlordDruidLeaderClient;

@Inject
public PlannerFactory(
final DruidSchema druidSchema,
final TimelineServerView serverView,
final QueryLifecycleFactory queryLifecycleFactory,
final DruidOperatorTable operatorTable,
final ExprMacroTable macroTable,
final PlannerConfig plannerConfig,
final AuthorizerMapper authorizerMapper,
final @Json ObjectMapper jsonMapper
final @Json ObjectMapper jsonMapper,
final @Coordinator DruidLeaderClient coordinatorDruidLeaderClient,
final @IndexingService DruidLeaderClient overlordDruidLeaderClient
)
{
this.druidSchema = druidSchema;
this.serverView = serverView;
this.queryLifecycleFactory = queryLifecycleFactory;
this.operatorTable = operatorTable;
this.macroTable = macroTable;
this.plannerConfig = plannerConfig;
this.authorizerMapper = authorizerMapper;
this.jsonMapper = jsonMapper;
this.coordinatorDruidLeaderClient = coordinatorDruidLeaderClient;
this.overlordDruidLeaderClient = overlordDruidLeaderClient;
}

public DruidPlanner createPlanner(final Map<String, Object> queryContext)
{
final SchemaPlus rootSchema = Calcites.createRootSchema(druidSchema, authorizerMapper);
final SchemaPlus rootSchema = Calcites.createRootSchema(
serverView,
druidSchema,
authorizerMapper,
coordinatorDruidLeaderClient,
overlordDruidLeaderClient,
jsonMapper
);
final PlannerContext plannerContext = PlannerContext.create(
operatorTable,
macroTable,
@@ -91,7 +91,6 @@ public class DruidSchema extends AbstractSchema
private static final int MAX_SEGMENTS_PER_QUERY = 15000;

private final QueryLifecycleFactory queryLifecycleFactory;
private final TimelineServerView serverView;
private final PlannerConfig config;
private final ViewManager viewManager;
private final ExecutorService cacheExec;
@@ -134,7 +133,7 @@ public DruidSchema(
)
{
this.queryLifecycleFactory = Preconditions.checkNotNull(queryLifecycleFactory, "queryLifecycleFactory");
this.serverView = Preconditions.checkNotNull(serverView, "serverView");
Preconditions.checkNotNull(serverView, "serverView");
this.config = Preconditions.checkNotNull(config, "config");
this.viewManager = Preconditions.checkNotNull(viewManager, "viewManager");
this.cacheExec = ScheduledExecutors.fixed(1, "DruidSchema-Cache-%d");