
[CARBONDATA-3932] [CARBONDATA-3903] change discovery.uri in presto guide and dml document update #3969

Closed
wants to merge 1 commit

Conversation

ShreelekhyaG
Contributor

@ShreelekhyaG ShreelekhyaG commented Oct 6, 2020

Why is this PR needed?

A few document changes in DML and the presto-guide.

What changes were proposed in this PR?

Change `discovery.uri=<coordinator_ip>:8086` to `discovery.uri=http://<coordinator_ip>:8086` in the presto guide.
Update the DML document with delta file information.
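For context, the property fixed here lives in Presto's `etc/config.properties`; `discovery.uri` must be a full HTTP URL, not a bare `host:port` pair. A minimal sketch of a worker-node config, where `coordinator=false` and the port value are illustrative assumptions and only the `discovery.uri` form comes from this PR:

```properties
# Presto etc/config.properties on a worker node (values illustrative)
coordinator=false
http-server.http.port=8086
# Must include the http:// scheme; a bare host:port is rejected
discovery.uri=http://<coordinator_ip>:8086
```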

Does this PR introduce any user interface change?

  • No

Is any new testcase added?

  • No

@CarbonDataQA1

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/4317/

@CarbonDataQA1

Build Failed with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/2567/

@ShreelekhyaG ShreelekhyaG changed the title [WIP] document update [CARBONDATA-3932] [CARBONDATA-3903] document update Oct 7, 2020
@QiangCai
Contributor

QiangCai commented Oct 7, 2020

Better to change the PR title to describe the details of what this PR does.

@ShreelekhyaG ShreelekhyaG changed the title [CARBONDATA-3932] [CARBONDATA-3903] document update [CARBONDATA-3932] [CARBONDATA-3903] change discovery.uri in presto guide and dml document update Oct 7, 2020
@CarbonDataQA1

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/2573/

@CarbonDataQA1

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/4323/

@@ -43,6 +43,7 @@ CarbonData DML statements are documented here,which includes:
**NOTE**:
* Use 'file://' prefix to indicate local input files path, but it just supports local mode.
* If run on cluster mode, please upload all input files to distributed file system, for example 'hdfs://' for hdfs.
* Each load creates new segment folder and manages the folder through tablestatus file.
Contributor

Since this content is already present in Segment Management, there is no need to add it here. If needed, the first line in Segment Management can be updated instead.

Contributor Author

ok. removed.

@@ -303,6 +304,7 @@ CarbonData DML statements are documented here,which includes:
* The data type of source and destination table columns should be same
* INSERT INTO command does not support partial success if bad records are found, it will fail.
* Data cannot be loaded or updated in source table while insert from source table to target table is in progress.
* Each insert creates new segment folder and manages the folder through tablestatus file.
Contributor

same comment as above

@@ -402,6 +404,11 @@ CarbonData DML statements are documented here,which includes:

## UPDATE AND DELETE

Since the data in CarbonData files is immutable, the updates and delete are done via maintaining two files namely:
Contributor

Can update the line based on the filesystem. For example: "Since the data stored in a file system like HDFS is immutable, ..."

@CarbonDataQA1

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/4340/

@CarbonDataQA1

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/2590/

@@ -447,13 +452,17 @@ CarbonData DML statements are documented here,which includes:

### DELETE

This command allows us to delete records from CarbonData table.
This command allows us to delete records from CarbonData table. Without providing expression, it will delete all the records from table.
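The changed line above can be illustrated with a minimal sketch; the table name `carbon_table` and the filter column are hypothetical:

```sql
-- Delete only the records matching the expression
DELETE FROM carbon_table WHERE id = 10;

-- Without a WHERE expression, all records in the table are deleted
DELETE FROM carbon_table;
```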
Contributor

This can be added after the syntax, as a note.

Contributor Author

Done

@@ -402,6 +402,11 @@ CarbonData DML statements are documented here,which includes:

## UPDATE AND DELETE

Since the data stored in file system like HDFS is immutable, the update and delete in carbondata are done via maintaining two files namely:
Contributor

Suggested change
Since the data stored in file system like HDFS is immutable, the update and delete in carbondata are done via maintaining two files namely:
Since the data stored in a file system like HDFS is immutable, the update and delete in carbondata are done via maintaining two files namely:

Contributor Author

Done

@CarbonDataQA1

Build Success with Spark 2.4.5, Please check CI http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/2623/

@CarbonDataQA1

Build Success with Spark 2.3.4, Please check CI http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/4373/

@asfgit asfgit closed this in bdc8c91 Oct 13, 2020
@Indhumathi27
Contributor

LGTM
