
[PROPOSAL] Support Dynamic Partition #2367

Closed
wants to merge 82 commits

Conversation

Contributor

@WingsGo WingsGo commented Dec 3, 2019

For #2262 and #2317

  1. Support dynamic partition when creating an OLAP table.
  2. Support modifying dynamic partition properties.

WingsGo and others added 5 commits December 3, 2019 21:25
[STORAGE] [INDEX]

For apache#2061 and apache#2062

Add bitmap index reader
Make SegmentIterator support the bitmap index
Add some metrics
The timeline for this issue is as follows:

1. For some reason, the master lost contact with the other two followers.
Judging from the master's logs, it did not print anything for almost 40 seconds.
It is suspected that it was stuck due to a full GC or some other reason, causing the
other two followers to think that the master had disconnected.

2. After the other two followers held a re-election, they continued to provide services.

3. The master node was manually restarted afterwards. On the first restart it needed to
roll back some committed logs, so it had to be shut down and restarted again.
After the second restart it returned to normal.

The main reason is that the master got stuck for 40 seconds for some reason.
This issue requires further observation.

At the same time, in order to alleviate this problem, we decided to make bdbje's heartbeat timeout
a configurable value. The default is 30 seconds; it can be configured to, for example, 1 minute
to avoid this problem for now.
Pure DocValue optimization for doris-on-es

Future TODO:
Currently we check whether pure_docvalue is enabled for every tuple scan, which is not reasonable; the check should be done once for the whole scan, outside the per-tuple loop. I will add this TODO later.
The heartbeat config format should be like "30 s", not "30"
This CL is related to commit 261072ecdda7e8eb3ce685c557c6dab15488d1f3
Contributor
@morningman morningman left a comment

One more thing: comments.
Every new class and its main methods should have a comment at the beginning explaining what it is and how it works.

@@ -3573,6 +3592,11 @@ private Table createOlapTable(Database db, CreateTableStmt stmt, boolean isResto
PropertyAnalyzer.analyzeDataProperty(stmt.getProperties(), DataProperty.DEFAULT_HDD_DATA_PROPERTY);
PropertyAnalyzer.analyzeReplicationNum(properties, FeConstants.default_replication_num);

Map<String, String> dynamicPartitionProperties = DynamicPartitionUtil.analyzeDynamicPartition(db, olapTable, properties);
Contributor

Here you call analyzeDynamicPartition before creating the partition,
and inside analyzeDynamicPartition you add this table to the
DynamicPartitionScheduler.
But if creating the partition fails, nothing removes this table from the DynamicPartitionScheduler, which causes a memory leak.

You should make sure that everything is OK before you register this table with the DynamicPartitionScheduler, or you need to find a way to clean it out.
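
For illustration, here is a minimal Java sketch of the two options mentioned above: register the table with the scheduler only after creation fully succeeds, or register early and clean up on failure. All class and method names below are hypothetical, not the actual Doris code.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: hypothetical names, not the real Doris classes.
class SchedulerRegistrationSketch {
    private final Set<Long> dynamicTables = ConcurrentHashMap.newKeySet();

    // Option A: register only after everything has succeeded.
    void createOlapTable(long tableId) throws Exception {
        analyzeProperties(tableId);     // validate dynamic_partition.* properties
        createPartitions(tableId);      // may throw; nothing registered yet, so no leak
        addTableToDatabase(tableId);    // table is now visible in the catalog
        dynamicTables.add(tableId);     // safe: the scheduler can always find the table
    }

    // Option B: register early, but clean up if creation fails.
    void createOlapTableWithCleanup(long tableId) throws Exception {
        dynamicTables.add(tableId);
        try {
            createPartitions(tableId);
            addTableToDatabase(tableId);
        } catch (Exception e) {
            dynamicTables.remove(tableId); // avoid leaking the registration
            throw e;
        }
    }

    private void analyzeProperties(long tableId) {}
    private void createPartitions(long tableId) {}
    private void addTableToDatabase(long tableId) {}
}
```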

Contributor Author
@WingsGo WingsGo Dec 4, 2019

Here you call analyzeDynamicPartition before creating the partition,
and inside analyzeDynamicPartition you add this table to the
DynamicPartitionScheduler.
But if creating the partition fails, nothing removes this table from the DynamicPartitionScheduler, which causes a memory leak.

You should make sure that everything is OK before you register this table with the DynamicPartitionScheduler, or you need to find a way to clean it out.

When the DynamicPartitionScheduler begins to execute, the dynamicAddPartition function traverses all registered tables and removes invalid ones.

Contributor

After you have executed analyzeDynamicPartition() and added the table to the DynamicPartitionScheduler, but before it is officially added to the database, the DynamicPartitionScheduler may start; it will not find this table in the database, so it will remove the table.

So it is better to add it after the table is truly created.

Contributor Author

After you have executed analyzeDynamicPartition() and added the table to the DynamicPartitionScheduler, but before it is officially added to the database, the DynamicPartitionScheduler may start; it will not find this table in the database, so it will remove the table.

So it is better to add it after the table is truly created.

Done

for (Map.Entry<String, String> entry : properties.entrySet()) {
if (entry.getKey().startsWith(DYNAMIC_PARTITION_PROPERTY_PREFIX)) {
dynamicPartitionProperties.put(entry.getKey(), entry.getValue());
DynamicPartitionScheduler.setLastUpdateTime(TimeUtils.getCurrentFormatTime());
Contributor

Why update the LastUpdateTime here?

Contributor Author

I will update LastUpdateTime after creating a dynamic partition table and after modifyTableDynamicPartition.

@@ -5074,6 +5113,31 @@ public void replayRenameColumn(TableInfo tableInfo) throws DdlException {
throw new DdlException("not implmented");
}

public void modifyTableDynamicPartition(Database db, OlapTable table, Map<String, String> properties) throws DdlException {
Contributor

The logic for modifying the dynamic partition is too simple.
There should be more checks to validate the request.
For example, do we allow the user to change the prefix of the dynamic partition's name? If yes, how do you make sure that your partition-checking logic in DynamicPartitionScheduler still works after the change?
As far as I know, you decide whether to add a partition based on whether the partition name exists.

Contributor Author

I first wanted to check whether there is an existing partition whose range is the same as the partition to add, but I am not sure how to compare two partition ranges. I will check it later.

Contributor Author

The logic for modifying the dynamic partition is too simple.
There should be more checks to validate the request.
For example, do we allow the user to change the prefix of the dynamic partition's name? If yes, how do you make sure that your partition-checking logic in DynamicPartitionScheduler still works after the change?
As far as I know, you decide whether to add a partition based on whether the partition name exists.

I use another way to judge whether the partition exists, but I am not sure if it is correct, because I am not familiar with the partition code.
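
For the range comparison discussed above, here is a hedged sketch using Guava's Range. The helper names, and the assumption that partition bounds can be compared this way, are illustrative only and not part of this PR.

```java
import com.google.common.collect.Range;

// Illustrative only: checks whether a candidate partition range already exists
// (exact match) or overlaps an existing one, instead of comparing by name.
public class PartitionRangeCheckSketch {
    static <T extends Comparable<T>> boolean sameRange(Range<T> existing, Range<T> candidate) {
        return existing.equals(candidate);
    }

    static <T extends Comparable<T>> boolean overlaps(Range<T> existing, Range<T> candidate) {
        return existing.isConnected(candidate) && !existing.intersection(candidate).isEmpty();
    }

    public static void main(String[] args) {
        Range<Integer> existing = Range.closedOpen(20191201, 20191202);
        System.out.println(sameRange(existing, Range.closedOpen(20191201, 20191202))); // true
        System.out.println(overlaps(existing, Range.closedOpen(20191202, 20191203)));  // false: adjacent, empty intersection
    }
}
```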

wangcong18 and others added 22 commits December 4, 2019 17:09
For apache#2383 
1. Limit the concurrent transactions of routine load job
2. Create new routine load task when txn is VISIBLE, not after COMMITTED.

For apache#2267 
1. All non-master daemon threads should also be started after the catalog is ready.

For apache#2354 
1. `fixLoadJobMetaError()` should be called after all meta data is read, including image and edit logs.
2. Mini load jobs should be set to CANCELLED when the corresponding transaction is not found, instead of UNKNOWN.
…en-and predicate (apache#2360)

This commit adds a new plan node named AssertNumRowsNode,
which is used to determine whether the number of rows exceeds a limit.
A subquery in a binary predicate or a between-and predicate should have an AssertNumRowsNode added,
which checks whether the subquery returns more than 1 row.
If the subquery returns more than 1 row, the query is cancelled.

For example:
There are 4 rows in table t1.
Query: select c1 from t1 where c1=(select c2 from t1);
Result: ERROR 1064 (HY000): Expected no more than 1 to be returned by expression select c2 from t1

ISSUE-2270
TPC-DS 6,54,58
Fix arithmetic operations between numeric and non-numeric values causing unexpected results.
After this patch you will get
mysql> select 1 +  "kks";
+-----------+
| 1 + 'kks' |
+-----------+
|         1 |
+-----------+
1 row in set (0.02 sec)

mysql> select 1 -  "kks";
+-----------+
| 1 - 'kks' |
+-----------+
|         1 |
+-----------+
1 row in set (0.01 sec)
maven 3.6.0 download URL is outdated, update it to 3.6.3
…pache#2380)

Features:
1. Support WHERE/ORDER BY/LIMIT
2. Columns: TableName, CreateTime, FinishTime, State
3. Only "And" between conditions
4. The TableName and State columns only support the "=" operator
5. The CreateTime and FinishTime columns support the "=", ">=", "<=", ">", "<", "!=" operators
6. The CreateTime and FinishTime columns support Date and DateTime strings, e.g. "2019-12-04" or "2019-12-04 17:18:00"

TestCase:
MySQL [haibotest]> show alter table column where State='FINISHED' and CreateTime > '2019-12-03' order by FinishTime desc limit 0,2;
+-------+---------------+---------------------+---------------------+---------------+---------+---------------+---------------+---------------+----------+------+----------+---------+
| JobId | TableName | CreateTime | FinishTime | IndexName | IndexId | OriginIndexId | SchemaVersion | TransactionId | State | Msg | Progress | Timeout |
+-------+---------------+---------------------+---------------------+---------------+---------+---------------+---------------+---------------+----------+------+----------+---------+
| 11134 | test_schema_2 | 2019-12-03 19:21:42 | 2019-12-03 19:22:11 | test_schema_2 | 11135 | 11059 | 1:192010000 | 3 | FINISHED | | N/A | 86400 |
| 11096 | test_schema_3 | 2019-12-03 19:21:31 | 2019-12-03 19:21:51 | test_schema_3 | 11097 | 11018 | 1:2063361382 | 2 | FINISHED | | N/A | 86400 |
+-------+---------------+---------------------+---------------------+---------------+---------+---------------+---------------+---------------+----------+------+----------+---------+
2 rows in set (0.00 sec)
The problem with the current implementation is that all data to be
inserted is counted in memory, but for the aggregation model and
some other special cases, not all data is actually inserted into the `MemTable`,
and such data should not be counted.

This change makes the `SkipList` use an exclusive `MemPool`,
and only data that will be inserted into the `SkipList` can use this
`MemPool`. In other words, discarded rows are not counted by the
`SkipList`'s `MemPool`.

To avoid checking twice whether a row already exists in the
`SkipList`, this change also modifies the `SkipList` interface (a `Hint`
is fetched by `Find()` and then passed to `InsertUseHint()`),
and makes the `SkipList` no longer aware of the aggregation logic.
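
As a rough illustration of that flow, here is a toy Java rendering; the real implementation is C++ in the BE, and all names below are made up. The caller asks the skip list for a hint once, then either aggregates in place or inserts using that hint.

```java
import java.util.TreeMap;

// Toy Java rendering of the Find()/InsertUseHint() flow described above; the
// real implementation is C++ in the BE, and all names here are hypothetical.
class SkipListHintSketch {
    static class Hint {
        final boolean found;
        final long[] existingRow; // the row already stored, if any
        Hint(boolean found, long[] existingRow) {
            this.found = found;
            this.existingRow = existingRow;
        }
    }

    private final TreeMap<String, long[]> rows = new TreeMap<>(); // stands in for the skip list

    // Find() tells the caller whether the key exists, so the caller decides
    // whether to aggregate in place or to copy the row and insert it.
    Hint find(String key) {
        long[] existing = rows.get(key);
        return new Hint(existing != null, existing);
    }

    void insertUseHint(Hint hint, String key, long[] row) {
        rows.put(key, row); // in the real skip list, the hint avoids a second search
    }

    // Caller-side logic: the aggregation decision lives outside the "skip list".
    void insertOrAggregate(String key, long value) {
        Hint hint = find(key);
        if (hint.found) {
            hint.existingRow[0] += value; // e.g. SUM; the discarded input row never reaches the exclusive pool
        } else {
            insertUseHint(hint, key, new long[] { value }); // copy into the pool, then insert
        }
    }

    public static void main(String[] args) {
        SkipListHintSketch sketch = new SkipListHintSketch();
        sketch.insertOrAggregate("k1", 1);
        sketch.insertOrAggregate("k1", 2);
        System.out.println(sketch.rows.get("k1")[0]); // prints 3
    }
}
```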

At present, because the data row (`Tuple`) generated by the upper layer
is different from the data row (`Row`) represented internally by the
engine, the row must be copied when inserting into the `MemTable`.
If the row then needs to be inserted into the `SkipList`, we need to copy it again
into the `SkipList`'s `MemPool`.

Also, at present, the aggregation functions only support copying via a
`MemPool`, so even if the data will not be inserted into the `SkipList`,
a `MemPool` is still used (in the future it can be replaced with an
ordinary `Buffer`). However, we reuse the allocated memory in this `MemPool`,
that is, we do not reallocate new memory every time.

Note: due to the characteristics of `MemPool` (once memory is allocated, it cannot
be partially freed), the following scenario may still cause multiple
flushes. For example, if the aggregation of a string column is `MAX`
and the data arrives in ascending order, then each data row
must request memory from the `SkipList`'s `MemPool`;
although the old rows in the `SkipList` are discarded,
the memory they occupy is still counted.

I did a test on my development machine using `STREAM LOAD`: a table with
only one tablet where all columns are keys; the original data was
1.1G (9318799 rows), and there were 377745 rows after removing duplicates.

It can be seen that both the number of files and the query efficiency are
greatly improved; the price paid is only a slight increase in load time.

before:
```
  $ ll storage/data/0/10019/1075020655/
  total 4540
  -rw------- 1 dev dev 393152 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.dat
  -rw------- 1 dev dev   1135 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_0_0.idx
  -rw------- 1 dev dev 421660 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.dat
  -rw------- 1 dev dev   1185 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_10_0.idx
  -rw------- 1 dev dev 184214 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.dat
  -rw------- 1 dev dev    610 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_1_0.idx
  -rw------- 1 dev dev 329181 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.dat
  -rw------- 1 dev dev    935 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_11_0.idx
  -rw------- 1 dev dev 343813 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.dat
  -rw------- 1 dev dev    985 Dec  2 18:43 0200000000000004f5404b740288294b21e52b0786adf3be_12_0.idx
  -rw------- 1 dev dev 315364 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.dat
  -rw------- 1 dev dev    885 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_2_0.idx
  -rw------- 1 dev dev 423806 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.dat
  -rw------- 1 dev dev   1185 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_3_0.idx
  -rw------- 1 dev dev 294811 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.dat
  -rw------- 1 dev dev    835 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_4_0.idx
  -rw------- 1 dev dev 403241 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.dat
  -rw------- 1 dev dev   1135 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_5_0.idx
  -rw------- 1 dev dev 350753 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.dat
  -rw------- 1 dev dev    860 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_6_0.idx
  -rw------- 1 dev dev 266966 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.dat
  -rw------- 1 dev dev    735 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_7_0.idx
  -rw------- 1 dev dev 451191 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.dat
  -rw------- 1 dev dev   1235 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_8_0.idx
  -rw------- 1 dev dev 398439 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.dat
  -rw------- 1 dev dev   1110 Dec  2 18:42 0200000000000004f5404b740288294b21e52b0786adf3be_9_0.idx

  {
    "TxnId": 16,
    "Label": "cd9f8392-dfa0-4626-8034-22f7cb97044c",
    "Status": "Success",
    "Message": "OK",
    "NumberTotalRows": 9318799,
    "NumberLoadedRows": 9318799,
    "NumberFilteredRows": 0,
    "NumberUnselectedRows": 0,
    "LoadBytes": 1079581477,
    "LoadTimeMs": 46907
  }

  mysql> select count(*) from xxx_before;
  +----------+
  | count(*) |
  +----------+
  |   377745 |
  +----------+
1 row in set (0.91 sec)

```

after:
```
  $ ll storage/data/0/10013/1075020655/
  total 3612
  -rw------- 1 dev dev 3328992 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.dat
  -rw------- 1 dev dev    8460 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_0_0.idx
  -rw------- 1 dev dev  350576 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.dat
  -rw------- 1 dev dev     985 Dec  2 18:26 0200000000000003d44e5cc72626f95a0b196b52a05c0f8a_1_0.idx

  {
    "TxnId": 12,
    "Label": "88f606d5-8095-4f15-b61d-49b7080c16b8",
    "Status": "Success",
    "Message": "OK",
    "NumberTotalRows": 9318799,
    "NumberLoadedRows": 9318799,
    "NumberFilteredRows": 0,
    "NumberUnselectedRows": 0,
    "LoadBytes": 1079581477,
    "LoadTimeMs": 48771
  }

  mysql> select count(*) from xxx_after;
  +----------+
  | count(*) |
  +----------+
  |   377745 |
  +----------+
  1 row in set (0.38 sec)

```
**Authorization checking logic**

There are some problems with the current password and permission checking logic. For example:
First, we create a user by:
`create user cmy@"%" identified by "12345";`

Then 'cmy' can log in with password '12345' from any host.

Second, we create another user by:
`create user cmy@"192.168.%" identified by "abcde";`

Because "192.168.%" has a higher priority in the permission table than "%", when "cmy" tries
to log in with password "12345" from host "192.168.1.1", it should match the second permission
entry and be rejected because of an invalid password.
But in the current implementation, Doris continues to check the password against the first entry and then lets it pass. So we should change this.

**Permission checking logic**

After a user logs in, it should have a unique identity obtained from the permission table. For example,
when "cmy" from host "192.168.1.1" logs in, its identity should be `cmy@"192.168.%"`. Doris
should use this identity to check other permissions, not the user's real identity, which is
`cmy@"192.168.1.1"`.

**Black list**
Functionally speaking, Doris only supports a WHITE LIST, which allows the user to log in from
the hosts in the white list. But in some cases we do need a BLACK LIST function.
Fortunately, by changing the logic described above, we can simulate the effect of a BLACK LIST.

For example, first we add a user by:
`create user cmy@'%' identified by '12345';`

Now user 'cmy' can log in from any host. If we don't want 'cmy' to log in from host A, we
can add a new user by:
`create user cmy@'A' identified by 'other_passwd';`

Because "A" has a higher priority in the permission table than "%", if 'cmy' tries to log in from A using password '12345', it will be rejected.
All classes that implement the Writable interface only need to implement the write() method.
The read() method should be implemented per class, according to each class's own situation.
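
A short sketch of that convention, with a made-up example class; the interface shape is assumed from the description above.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Sketch: implementors only provide write(); deserialization is a per-class
// static read() rather than a method forced by the interface.
interface Writable {
    void write(DataOutput out) throws IOException;
}

class ExampleInfo implements Writable {
    private final long dbId;
    private final String name;

    ExampleInfo(long dbId, String name) {
        this.dbId = dbId;
        this.name = name;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(dbId);
        out.writeUTF(name);
    }

    // Not part of Writable: each class decides what its own read() looks like.
    public static ExampleInfo read(DataInput in) throws IOException {
        return new ExampleInfo(in.readLong(), in.readUTF());
    }
}
```
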
…ut (apache#2405)

[Load] 

When performing a long-running load job, the following error may occur and cause the load to fail:

load channel manager add batch with unknown load id: xxx

This error occurs because Doris opened an unrelated channel during the load
process. This channel will not receive any data during the entire load, so
after a fixed timeout the channel is released.

After the entire load job is completed, Doris tries to close all open channels. When it tries to
close this channel, it finds that the channel no longer exists and reports an error.

This CL passes the load job's timeout to the load channels, so that the channels' timeout
is the same as the load job's.
fe/src/main/cup/sql_parser.cup
if (tableProperty != null &&
        tableProperty.getDynamicPartitionProperty().isExist() &&
        Boolean.parseBoolean(tableProperty.getDynamicPartitionProperty().getEnable())) {
    throw new DdlException("Cannot modify partition on a Dynamic Partition Table, set `dynamic_partition.enable` to false firstly.");
Contributor

Why not support dropping partitions in a dynamic partition table?
I think it would be OK to allow adding and dropping partitions in a dynamic partition table.
Dynamic partition only helps the user create or drop partitions automatically; if a partition has been created manually, the dynamic partition scheduler just skips it and continues to handle other tables.

Contributor Author

I think when a user wants to add or drop a partition on a dynamic partition table, they should know what they are doing and be aware that it is a dynamic partition table. If the user is sure they want to add or drop a partition on a dynamic partition table, they can stop the dynamic partition feature first, and I have given that tip in the error message.

Contributor

It is OK not to allow users to create partitions for a dynamic partition table. However, I'm not very clear about the specific hazards of allowing users to create partitions. If the harm is not so serious, can we relax this restriction?

Contributor Author

In my opinion, the harm is not so serious. I just want to make sure that the user doesn't forget that they are adding or dropping partitions on a dynamic partition table, unless it is necessary. Relaxing this restriction is OK for me.

Contributor

It is OK not to allow users to create partitions for a dynamic partition table. However, I'm not very clear about the specific hazards of allowing users to create partitions. If the harm is not so serious, can we relax this restriction?

Forbidding users from modifying partitions is just to simplify our logic. Otherwise there would be a large number of validation checks and concurrent-conflict checks, which would make it easy to introduce errors.


public void replayModifyTableDynamicPartition(DynamicPartitionInfo info) {
    long dbId = info.getDbId();
    long tableId = info.getTableId();
Contributor

If the dbId and tableId are only used here, there is no need to keep them in DynamicPartitionInfo, because you can log dbId, tableId, and DynamicPartitionInfo separately. Or you can implement a struct ModifyTableDynamicPartitionJournal which contains the dbId and tableId.

Contributor Author

I'm confused about why I can't hold these two ids, and what the point is of implementing a struct ModifyTableDynamicPartitionJournal. Do you mean creating a class like JournalEntity?

Contributor

We should ensure that a class contains only the necessary information and avoid circular dependencies. In this example, unnecessary DB information is coupled into the table information, which will limit the extension of future functions. Suppose we support moving tables between DBs in the future: if no dbId is recorded here, only DB-level information needs to be modified; if the dbId is recorded here, you also need to change the information here, which is completely unnecessary.

ModifyTableDynamicPartitionJournal is used to write a log. The point here is that the information recorded in the log is not necessarily the same data structure as the one existing in memory.

Contributor Author

I think you mean that I don't need to hold the db in modifyDynamicPartition and can manipulate the table directly, but I don't know how to get a table without getting its db first. If I just log the table like editLog.logDynamicPartition(olapTable), it violates the meaning of your last sentence. So I think the point is: if I only record the table info and the modified properties, how can I get the table without getting the db first, so that I can replay the modified properties on the table?

Contributor

What I want to say is that you should log:

editLog.logDynamicPartition(dbId, tableId, dynamicPartitionInfo)

Contributor

@WingsGo Hi, I think @imay misunderstood the meaning of DynamicPartitionInfo. DynamicPartitionInfo here is just a kind of edit log, used to store all the information about this "modify" operation, including dbId and tblId.

So please change it back to what you implemented before, putting the dbId and tblId inside the
DynamicPartitionInfo, and log it as a whole:

editLog.logDynamicPartition(dynamicPartitionInfo)

Maybe you can change its class name to "ModifyDynamicPartitionInfo" to make it look more like an edit log class.
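
For reference, a rough sketch of what such an edit-log class could look like; the field layout and serialization format here are assumptions, not the PR's final code.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch only: an edit-log entry carrying everything needed to replay the
// "modify dynamic partition" operation, including dbId and tableId.
class ModifyDynamicPartitionInfoSketch {
    private final long dbId;
    private final long tableId;
    private final Map<String, String> properties;

    ModifyDynamicPartitionInfoSketch(long dbId, long tableId, Map<String, String> properties) {
        this.dbId = dbId;
        this.tableId = tableId;
        this.properties = properties;
    }

    public long getDbId() { return dbId; }
    public long getTableId() { return tableId; }
    public Map<String, String> getProperties() { return properties; }

    // In the FE this would be the Writable write(); the format here is illustrative.
    public void write(DataOutput out) throws IOException {
        out.writeLong(dbId);
        out.writeLong(tableId);
        out.writeInt(properties.size());
        for (Map.Entry<String, String> e : properties.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
    }

    public static ModifyDynamicPartitionInfoSketch read(DataInput in) throws IOException {
        long dbId = in.readLong();
        long tableId = in.readLong();
        int size = in.readInt();
        Map<String, String> props = new HashMap<>();
        for (int i = 0; i < size; i++) {
            props.put(in.readUTF(), in.readUTF());
        }
        return new ModifyDynamicPartitionInfoSketch(dbId, tableId, props);
    }
}
```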

Contributor Author
@WingsGo WingsGo Dec 10, 2019

@morningman I have reverted it and renamed "DynamicPartitionInfo" to "ModifyDynamicPartitionInfo".

lichaoyong and others added 19 commits December 13, 2019 14:16
Change default docker image from build-env to build-env-1.1
1. Because we don't support the array type currently, I use variable arguments instead.

2. intersect_count directly returns the final count, not a bitmap like bitmap_union, because returning a bitmap from intersect_count is more complex and needs more serialization. If we really need a bitmap result from intersect_count, we can do that in another PR, and it won't cause compatibility problems.
[Tag System] 
This CL includes 2 parts:

    Add classes related to "tag"
        Resource: the collective name for the nodes that provide various service capabilities in a Doris cluster.
        Tag: a Tag consists of a type and a name.
        TagSet: represents a set of tags.
        TagManager: maintains 2 indexes (see the sketch after this commit note):
        one from tag to resources,
        the other from resource to tags

    ISSUE apache#1723

    Using JSON as the serialization method for metadata

    Introduce the GSON library to serialize the new classes mentioned above.

    ISSUE apache#2415 apache#2389

GSON's version is updated to 2.8.6
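
As referenced above, a simplified sketch of the two indexes TagManager keeps; the types here are illustrative stand-ins, not the classes added by this commit.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: a TagManager-like structure with the two indexes described above.
class TagIndexSketch {
    record Tag(String type, String name) {}

    private final Map<Tag, Set<Long>> tagToResources = new HashMap<>();   // tag -> resource ids
    private final Map<Long, Set<Tag>> resourceToTags = new HashMap<>();   // resource id -> tags

    void addTag(long resourceId, Tag tag) {
        tagToResources.computeIfAbsent(tag, k -> new HashSet<>()).add(resourceId);
        resourceToTags.computeIfAbsent(resourceId, k -> new HashSet<>()).add(tag);
    }

    Set<Long> resourcesWithTag(Tag tag) {
        return tagToResources.getOrDefault(tag, Set.of());
    }

    Set<Tag> tagsOfResource(long resourceId) {
        return resourceToTags.getOrDefault(resourceId, Set.of());
    }
}
```
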
@WingsGo WingsGo closed this Dec 16, 2019