Implement "citus local table" creation logic and a udf #3852
Conversation
* - its distribution method will be DISTRIBUTE_BY_NONE,
* - its replication model will be ReplicationModel,
* - its replication factor will be set to 1.
* On the contrary of reference tables, a citus local table has only one placement
This line of the comment is slightly confusing: On the contrary of reference tables. I get what you mean, but the sentence should be separated: similar to ref. tables, it has 1 placement; in addition to that, the restriction is that the placement is only allowed on the coordinator.
EnsureTableOwner(relationOid);

/* we allow creating citus local tables only from relations with RELKIND_RELATION */
I think we should be allowing any kind of relation
Actually, EnsureRelationKindSupported allows us to create the citus table if the table is
- a regular table, or
- a foreign table, or
- a partitioned table, and
- not an inherited table.
This check is to prevent views, sequences etc.
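As a side note, the filter described above can be sketched in a few lines. This is an illustrative Python model only: the relkind letters mirror what PostgreSQL stores in pg_class.relkind, but the real check is EnsureRelationKindSupported in the Citus C code.

```python
# Illustrative sketch of the relation-kind filter described above. The
# relkind letters mirror PostgreSQL's pg_class.relkind; the real check is
# EnsureRelationKindSupported in the Citus C code.
RELKIND_RELATION = "r"           # regular table
RELKIND_FOREIGN_TABLE = "f"      # foreign table
RELKIND_PARTITIONED_TABLE = "p"  # partitioned table
RELKIND_VIEW = "v"               # rejected: view
RELKIND_SEQUENCE = "S"           # rejected: sequence

SUPPORTED_RELKINDS = {
    RELKIND_RELATION,
    RELKIND_FOREIGN_TABLE,
    RELKIND_PARTITIONED_TABLE,
}


def relation_kind_supported(relkind: str, is_inherited: bool = False) -> bool:
    """Return True if a citus table could be created from this relation."""
    return not is_inherited and relkind in SUPPORTED_RELKINDS
```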
* the nodes such that we can create foreign keys and joins work immediately
* after creation.
*/
EnsureReferenceTablesExistOnAllNodes();
Could we ever be in a state where reference tables are not replicated?
As #3637 claims, yes
Force-pushed from 8eb5401 to 4b6ae1e
Force-pushed from 4b6ae1e to 7cf6515
Force-pushed from 7cf6515 to 96452cb
Force-pushed from d08e70c to 8476a4d
I have done an initial review; I will also do tests locally for different scenarios. Submitting the review for faster iteration.
-- any foreign key between citus local tables and other tables cannot be set for now
-- most should error out (for now with meaningless error messages)

-- between citus local tables
We can also have tests where a table references some other table before creating the citus local table.
Since we don't support initial fkeys with other tables yet, I will add some tests with self-referencing fkeys.
added
Force-pushed from 28479b3 to 1615b56
We seem to have a minor bug with schemas:
CREATE SCHEMA "CiTUS!LocalTables";
-- create table with weird names
CREATE TABLE "CiTUS!LocalTables"."LocalTabLE.1!?!"(id int, "TeNANt_Id" int);
SELECT create_citus_local_table('"CiTUS!LocalTables"."LocalTabLE.1!?!"');
ERROR: syntax error at or near ".1"
And the backtrace:
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 3.1
* frame #0: 0x000000010293e467 postgres`errstart(elevel=20, filename="scan.l", lineno=1149, funcname="scanner_yyerror", domain=0x0000000000000000) at elog.c:263:14
frame #1: 0x00000001029a3da5 postgres`scanner_yyerror.cold.2 + 53
frame #2: 0x000000010266c577 postgres`scanner_yyerror(message=<unavailable>, yyscanner=<unavailable>) at scan.l:1145:3
frame #3: 0x000000010266aef2 postgres`base_yyerror(yylloc=<unavailable>, yyscanner=<unavailable>, msg=<unavailable>) at gram.y:15510:2
frame #4: 0x0000000102667765 postgres`base_yyparse(yyscanner=0x00007fdb49013f10) at gram.c:45676:7
frame #5: 0x000000010266ddb2 postgres`raw_parser(str=<unavailable>) at parser.c:53:13
frame #6: 0x00000001028566f4 postgres`pg_parse_query(query_string="ALTER TABLE \"CiTUS!LocalTables\".\"LocalTabLE.1!?!\" RENAME TO LocalTabLE.1!?!_102087;") at postgres.c:641:23
frame #7: 0x000000010521a74f citus.so`ParseTreeRawStmt(ddlCommand="ALTER TABLE \"CiTUS!LocalTables\".\"LocalTabLE.1!?!\" RENAME TO LocalTabLE.1!?!_102087;") at worker_data_fetch_protocol.c:574:24
frame #8: 0x000000010521a2d9 citus.so`ParseTreeNode(ddlCommand=<unavailable>) at worker_data_fetch_protocol.c:560:24
frame #9: 0x000000010519b359 citus.so`RenameRelationToShardRelation(shellRelationId=<unavailable>, shardId=<unavailable>) at create_citus_local_table.c:260:20
frame #10: 0x000000010519b091 citus.so`CreateCitusLocalTableShard(relationId=1913196, shardId=102087) at create_citus_local_table.c:231:2
frame #11: 0x000000010519afab citus.so`CreateCitusLocalTable(shellRelationId=1913196) at create_citus_local_table.c:182:2
frame #12: 0x000000010519ae97 citus.so`create_citus_local_table(fcinfo=<unavailable>) at create_citus_local_table.c:90:2
frame #13: 0x00000001026f82c1 postgres`ExecInterpExpr(state=0x00007fdb4706ef98, econtext=0x00007fdb4706ec88, isnull=0x00007ffeed68ae97) at execExprInterp.c:649:8
frame #14: 0x00000001027272cd postgres`ExecEvalExprSwitchContext(state=0x00007fdb4706ef98, econtext=0x00007fdb4706ec88, isNull=<unavailable>) at executor.h:307:13
frame #15: 0x0000000102727280 postgres`ExecProject(projInfo=<unavailable>) at executor.h:341:9
frame #16: 0x00000001026fe890 postgres`ExecutePlan(estate=0x00007fdb4706e918, planstate=0x00007fdb4706eb70, use_parallel_mode=<unavailable>, operation=CMD_SELECT, sendTuples=<unavailable>, numberTuples=0, direction=ForwardScanDirection, dest=0x00007fdb4a81bda8, execute_once=<unavailable>) at execMain.c:1646:10
frame #17: 0x00000001026fe762 postgres`standard_ExecutorRun(queryDesc=0x00007fdb4701ed18, direction=<unavailable>, count=0, execute_once=<unavailable>) at execMain.c:364:3
frame #18: 0x00000001051ca81e citus.so`CitusExecutorRun(queryDesc=0x00007fdb4701ed18, direction=ForwardScanDirection, count=0, execute_once=<unavailable>) at multi_executor.c:203:4
frame #19: 0x00000001053ab521 pg_stat_statements.so`pgss_ExecutorRun(queryDesc=0x00007fdb4701ed18, direction=ForwardScanDirection, count=0, execute_once=<unavailable>) at pg_stat_statements.c:891:4
frame #20: 0x000000010285b9a2 postgres`PortalRunSelect(portal=0x00007fdb47833118, forward=<unavailable>, count=0, dest=<unavailable>) at pquery.c:929:4
frame #21: 0x000000010285b524 postgres`PortalRun(portal=0x00007fdb47833118, count=9223372036854775807, isTopLevel=<unavailable>, run_once=<unavailable>, dest=0x00007fdb4a81bda8, altdest=0x00007fdb4a81bda8, completionTag="") at pquery.c:770:18
frame #22: 0x0000000102858be8 postgres`exec_simple_query(query_string="SELECT create_citus_local_table('\"CiTUS!LocalTables\".\"LocalTabLE.1!?!\"');") at postgres.c:1215:10
frame #23: 0x0000000102858395 postgres`PostgresMain(argc=<unavailable>, argv=<unavailable>, dbname=<unavailable>, username=<unavailable>) at postgres.c:0
frame #24: 0x00000001027e2fc6 postgres`BackendRun(port=0x0000000102ba4a7a) at postmaster.c:4448:2
frame #25: 0x00000001027e2889 postgres`BackendStartup(port=<unavailable>) at postmaster.c:4139:3
frame #26: 0x00000001027e2065 postgres`ServerLoop at postmaster.c:1704:7
frame #27: 0x00000001027e0117 postgres`PostmasterMain(argc=3, argv=0x00007fdb46c03210) at postmaster.c:1377:11
frame #28: 0x000000010274a1b6 postgres`main(argc=3, argv=0x00007fdb46c03210) at main.c:228:3
frame #29: 0x00007fff631f63d5 libdyld.dylib`start + 1
(lldb) c
Process 12418 resuming
This is blocking my testing at the moment. These are the basic tests that I'm considering expanding: https://gist.github.com/onderkalaci/a9856bfc8b242ee154e55b6486a48524
So, maybe make sure that we support all of them before I restart testing (there might be some syntax errors in the file).
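From the backtrace above, the generated command quotes the old name but not the shardId-suffixed new name (`RENAME TO LocalTabLE.1!?!_102087`), so the parser chokes on the special characters. Below is a minimal illustrative sketch, not the Citus code itself; quote_identifier here is a hypothetical helper that mirrors PostgreSQL's quote_ident rule (wrap in double quotes, doubling embedded double quotes).

```python
# Sketch of why the RENAME fails and how quoting the new name fixes it.
# quote_identifier is a hypothetical helper mirroring PostgreSQL's
# quote_ident: wrap in double quotes, doubling embedded double quotes.
def quote_identifier(name: str) -> str:
    return '"' + name.replace('"', '""') + '"'


def rename_to_shard_command(schema: str, table: str, shard_id: int) -> str:
    """Build the shard-rename DDL with every identifier properly quoted."""
    shard_name = f"{table}_{shard_id}"
    return (f"ALTER TABLE {quote_identifier(schema)}.{quote_identifier(table)} "
            f"RENAME TO {quote_identifier(shard_name)};")
```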
EnsureTableOwner(relationId);

/* we allow creating citus local tables only from relations with RELKIND_RELATION */
EnsureRelationKindSupported(relationId);
Can't we simply move the if (PartitionTable(relationId) || PartitionedTable(relationId)) check before this one? I'm OK with not having a new function because we're giving the correct errors.
* - the foreign keys referencing to the relation.
*/
static void
CreateCitusLocalTableShard(Oid relationId, uint64 shardId)
Shall we call this function RenamePostgresTableToCitusShard to make it more explicit?
Force-pushed from f5210fd to 8710dd0
RETURNS void
LANGUAGE C STRICT
AS 'MODULE_PATHNAME', $$create_citus_local_table$$;
COMMENT ON FUNCTION create_citus_local_table(table_name regclass)
missing pg_catalog.
* Include DEFAULT clauses for columns getting their default values from
* a sequence.
*/
bool includeSequenceDefaults = true;
Hmm, with this one we are slightly diverging from the regular shard creation. On hash distributed tables, we don't create the sequences on the shards (see CreateShardsOnWorkers); we only have sequences on the coordinator shell tables. And we evaluate sequences on the coordinator.
As far as I can tell, this is OK. Let's sign up for @marcocitus's office hour next week, where we can explain the latest implementation and ask him about these types of edge cases.
Well, it turns out that when there is a sequence in the table, we cannot enable MX:
Tables in this gist: https://gist.github.com/onderkalaci/a9856bfc8b242ee154e55b6486a48524:
select start_metadata_sync_to_node(nodename, nodeport) FROM pg_dist_node;
ERROR: relation "CiTUS!LocalTables.LocalTabLE.1!?!_serial_data_seq" does not exist
CONTEXT: while executing command on localhost:9700
I think that is because of the above note: getOwnedSequences returns the sequence for the shard but not for the shell table. That's why we probably need to drop the sequence on the shard.
I will look into that Monday
Wonder whether we should make a pass over pg_depend to update dependencies to the shell table OID.
Semi-related: 9ed67a0
onderkalaci wrote:
Well, it turns out that when there is a sequence in the table, we cannot enable MX:
For the sequences implied automatically by column definitions (serial, bigserial etc.), I implemented the following patch: fbc981f. This commit drops default column definitions from the shard relation and grants ownership of the owned sequences to the corresponding columns of the shell table.
However, we will still have two problems around sequences as I mentioned in #3885 & #3884.
They point to existing problems around the other citus table types, and obviously will apply to citus local tables too.
I wonder if I should work on implementing a solution (like the one in 9ed67a0 done by Marco) in another pr before merging this pr, or should we leave it to another time?
(cc: @marcocitus & cc: @onderkalaci )
or should we leave it to another time ?
I think we can leave it to a later time
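The ownership hand-off in fbc981f described above (drop shard-side defaults, re-own the sequences to the shell table) could be sketched as a toy model. Everything here is illustrative: the dict stands in for the pg_depend/ownership metadata, and owned_sequences only roughly echoes what getOwnedSequences does.

```python
# Toy model of the fbc981f approach: after the postgres table becomes the
# shard, re-own its sequences to the shell table's columns, so that
# getOwnedSequences-style lookups resolve to the shell table rather than
# the shard. All names here are illustrative, not the Citus C code.
sequence_owner = {}  # sequence name -> (owning table, owning column)


def reassign_owned_sequences(old_table: str, new_table: str) -> None:
    """ALTER SEQUENCE ... OWNED BY new_table.column, for each owned sequence."""
    for seq, (table, column) in list(sequence_owner.items()):
        if table == old_table:
            sequence_owner[seq] = (new_table, column)


def owned_sequences(table: str) -> list:
    """Rough analogue of getOwnedSequences against this toy metadata."""
    return sorted(seq for seq, (t, _) in sequence_owner.items() if t == table)
```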
*/
EnsureReferenceTablesExistOnAllNodes();

List *shellTableDDLEvents = GetShellTableDDLEventsForCitusLocalTable(relationId);
Thanks for using blocks here. I think we can use even more blocks by separating the part up to this line into one function and the part after the InsertMetadataForCitusLocalTable call into another, as pre-create-table and post-create-table operations. There might be common operations between different table types and we might need to call these methods from those places as well, but I still suggest creating these methods at least for citus local tables in this PR, so that this function is in even better shape; later, in a separate PR, we can use the methods (with small changes) in other places as well.
FinalizeCitusLocalTableCreation(relationId);

relation_close(relation, ExclusiveLock);
Why are we releasing the lock? I think we should close with NoLock and keep the lock until the end of the transaction. (Still, adding this to the discussion with Marco.)
Now that we pushed all the operations done on the relation into that function (CreateCitusLocalTable), isn't it good to release the lock at the end?
I think Talha's test reveals the issue. Holding the lock until the transaction ends is necessary:
- open two backends
- create a table with 'create table local(a int);' in the first one
- run select create_citus_local_table('local') but stop it right after ConvertLocalTableToShard is executed, so at this point the postgres table is converted to a shard but the shell table (empty table) is not created yet
- in the other backend run 'select create_citus_local_table('local');'; this will block on Relation relation = relation_open(relationId, ExclusiveLock); in CreateCitusLocalTable
- continue in the first one, so it creates the citus local table
- now as the lock is released from the first backend, the second one continues, it doesn't fail, and we have an extra entry in pg_dist_partition

(please retry this test after changing to NoLock)
I will try and update here
As I also tried locally, "not releasing the lock" doesn't help to solve the problem that @SaitTalhaNisanci mentioned. This is because, whether we release the lock at the end of the xact or when all the creation logic in CreateCitusLocalTable is done, the other backend would still try to create a citus local table from the relationId, and this is indeed valid as that relation is still not a citus relation; it became a shard relation.
So, I propose checking the relationId with the RelationIsAKnownShard function to error out.
Does it sound good?
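The race and the proposed guard could be sketched like this. This is only a toy Python model: the sets stand in for the Citus metadata, and the known-shard check merely echoes the RelationIsAKnownShard function discussed above.

```python
# Toy model of the race: backend 1 converts the postgres table into a
# shard; backend 2, which was blocked on the table lock, then retries the
# same relation. The proposed guard errors out early when the relation is
# already a known shard, echoing Citus' RelationIsAKnownShard check.
known_shards = set()
citus_tables = set()


def create_citus_local_table(relation_id: int) -> None:
    if relation_id in known_shards:
        raise ValueError("relation is a shard relation")
    if relation_id in citus_tables:
        raise ValueError("relation is already a citus table")
    # convert the postgres table into the single shard of the citus local
    # table, then register the shell table in the metadata (simplified:
    # the toy model reuses one id for both shard and shell table)
    known_shards.add(relation_id)
    citus_tables.add(relation_id)
```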
Force-pushed from b2cbffa to e8ec5bc
Force-pushed from 6af50b8 to da74ce7
This reverts commit 48480d3.
Now I can see the codecov report and it shows 99%, I think that is pretty good :)
I did some tests and it seemed to work as expected. As we will do more tests when merging to master, I am approving.
CREATE TABLE citus_local_table_1 (a int primary key);

-- this should fail as coordinator is removed from pg_dist_node
SELECT create_citus_local_table('citus_local_table_1');
This fails with the call to ErrorIfCoordinatorNotAddedAsWorkerNode. So the only single line that is not covered in this PR is in CoordinatorNode:

ereport(ERROR, (errmsg("couldn't find the coordinator node in the metadata "
                       "as it is not added as a worker")));

Currently it seems a bit hard to hit this line. I have tried the following:
- in one backend, stopped the execution right after ErrorIfCoordinatorNotAddedAsWorkerNode while creating a citus local table
- in another backend, tried to remove the coordinator from the cluster
As expected the remove operation failed with:
[local] talha@talha:9700-30867=# SELECT 1 FROM master_remove_node('localhost', 9700);
ERROR: you cannot remove the primary node of a node group which has shard placements
Time: 28518.667 ms (00:28.519)
So this is expected, as the second backend is trying to get an ExclusiveLock on pg_dist_node in ModifiableWorkerNode, which is held by the first backend.
-- cannot create citus local table from an existing citus table
SELECT create_distributed_table('distributed_table', 'a');

-- this will error out
maybe move the above comment here -- cannot create citus local table from an existing citus table
-- this will error out
SELECT create_citus_local_table('distributed_table');

-- partitiond table tests --
partitiond -> partitioned
BEGIN;
CREATE TABLE citus_local_table PARTITION OF partitioned_table FOR VALUES FROM (20) TO (30);

-- this should fail
maybe move "cannot create citus local table as a partition of a local table" here
SELECT create_citus_local_table('citus_local_table');

-- this should fail
I think in general, instead of writing "this should fail", writing the reason directly right above the command could be easier to understand later.
@@ -0,0 +1,75 @@
setup |
Thanks for adding these tests.
src/test/regress/bin/normalize.sed (Outdated)
@@ -34,6 +34,9 @@ s/ keyval(1|2|ref)_[0-9]+ / keyval\1_xxxxxxx /g
# shard table names for custom_aggregate_support
s/ daily_uniques_[0-9]+ / daily_uniques_xxxxxxx /g

# shard table names for isolation_create_citus_local_table
s/ "citus_local_table_[0-9]_[0-9]+" / "citus_local_table_[0-9]_xxxxxxx" /g
what if the first part is more than 1 digit? Then would it make sense to use [0-9]+ in the first part as well?
yes, it makes sense
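The difference is easy to check; Python's re behaves the same as sed's extended regex for this pattern, so the sketch below only illustrates why the one-digit class misses two-digit shard indexes.

```python
import re

# The normalize.sed patterns discussed above, transliterated to Python.
# Python's re and sed's extended regex agree for these constructs.
old_pattern = re.compile(r' "citus_local_table_[0-9]_[0-9]+" ')
new_pattern = re.compile(r' "citus_local_table_[0-9]+_[0-9]+" ')

single_digit = ' "citus_local_table_1_102049" '
double_digit = ' "citus_local_table_10_102049" '
```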
*/
typedef enum ExtractForeignKeyConstrainstMode
{
	/* extract the foreign key OIDs where the table is the referencing one */
Minor: extract -> include, to be aligned with the variable name
I'm less inclined to make this change in this pr, as the only related change done in this pr is to make its usage public.
* create_citus_local_table executions would not error out for that relation.
* Hence we need to error out for shard relations too.
*/
ErrorIfRelationIsAKnownShard(relationId);
If we remove 'citus local table' from these messages, it might make sense to put these methods in a common place as non-static so that they can be used in other places as well.
/*
 * ErrorIfCoordinatorNotAddedAsWorkerNode error out if coordinator is not added
error -> errors
ereport(ERROR, (errmsg("couldn't find the coordinator node in the metadata "
                       "as it is not added as a worker")));
}
ErrorIfCoordinatorNotAddedAsWorkerNode();
For future reference, this line doesn't error out with the current tests.
(A detailed commit message will be added later)
…-cache-entry-rebased

DESCRIPTION: Introduce citus local tables

The commits in this pr are merged from other sub-pr's:
* community/#3852: Brings lazy & fast table creation logic for the create_citus_local_table udf
* community/#3995: Brings extended utility command support for citus local tables
* community/#4133: Brings changes in the planner and in several places to integrate citus local tables into our distributed execution logic

We are introducing citus local tables, which are a new table type in citus. To be able to create a citus local table, first we need to add the coordinator as a worker node. Then, we can create a citus local table via SELECT create_citus_local_table(<tableName>). Calling this udf from the coordinator will actually create a single-shard table whose shard is on the coordinator.

Also, from the citus metadata perspective, for citus local tables:
* partitionMethod is set to DISTRIBUTE_BY_NONE (like reference tables), and
* replicationModel is set to the current value of citus.replication_model, which already can't be equal to REPLICATION_MODEL_2PC, which is only used for reference tables internally.

Note that currently we support creating citus local tables only from postgres tables living on the coordinator. That means, it is not allowed to execute this udf from worker nodes, and it is not allowed to move the shard of a citus local table to any other node.

Also, the run-time complexity of calling the create_citus_local_table udf does not depend on the size of the relation; that means creating citus local tables is actually a non-blocking operation. This is because, instead of copying the data to a new shard, this udf just does the following:
* convert the input postgres table to the single shard of the citus local table by suffixing the shardId to its name, constraints, indexes, triggers etc.,
* create a shell table for the citus local table on the coordinator and on mx-worker nodes when metadata sync is enabled,
* create the necessary objects on the shell table.

Here, we should also note that we can execute queries/dml's from mx worker nodes, as citus local tables are already first class citus tables. Even more, we brought trigger support for citus local tables. That means, we can define triggers on citus local tables so that users can define trigger objects to perform execution of custom functions that might even modify other citus tables and other postgres tables.

Other than trigger support, citus local tables can also be involved in foreign key relationships with reference tables. Here the only restriction is that foreign keys from reference tables to citus local tables cannot have behaviors other than RESTRICT & NO ACTION. Other than that, foreign keys between citus local tables and reference tables just work fine.

All in all, citus local tables are actually just local tables living on the coordinator, but natively accessible from other nodes like other first class citus tables, and this enables us to set foreign key constraints between very big coordinator tables and reference tables without having to do any data replication to worker nodes for local tables.
(Replaces #3729)
This pr is opened against the single-placement-table/master branch, the main branch we are developing citus local tables on. We will merge the things as a whole when the end-to-end implementation is complete.

This pr implements lazy shard creation for citus local tables. That means: shardId.

TODO (in another pr):
(pr is open: Bring deparse logic for policies from enterprise #3896, should add some tests for citus local tables here or in another pr against single-placement-table/master)
(won't support policies on citus local tables in community)