
add gflags to control max_edge_returned_per_vertex #1221

Merged · 7 commits merged into vesoft-inc:master on Nov 8, 2019
Conversation

liuyu85cn (Contributor):

No description provided.

@liuyu85cn added the ready-for-testing (PR: ready for the CI test) label on Nov 7, 2019
@nebula-community-bot (Member):
Unit testing failed.

@nebula-community-bot (Member):
Unit testing passed.

@@ -225,7 +225,9 @@ TEST(QueryBoundTest, OutBoundSimpleTest) {
checkResponse(resp, 30, 12, 10001, 7, true);
}

-TEST(QueryBoundTest, inBoundSimpleTest) {
+TEST(QueryBoundTest, MaxEdgesReturenedTest) {
Contributor:

Don't change the original UT, add a new one.
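
To make that request concrete, here is a minimal sketch of keeping the original unit test and adding a separate one for the new flag, assuming the gtest and gflags conventions already used in this file; the test names, the cap value, and the checkResponse arguments are illustrative placeholders, not the PR's final code.

#include <climits>
#include <gtest/gtest.h>
#include <gflags/gflags.h>

// Defined in the storage module (see the DEFINE_int32 hunk below).
DECLARE_int32(max_edge_returned_per_vertex);

// The original test stays exactly as it was.
TEST(QueryBoundTest, inBoundSimpleTest) {
    // ... unchanged body from the existing file ...
}

// A new test exercises the cap instead of replacing the old one.
TEST(QueryBoundTest, MaxEdgesReturnedTest) {
    FLAGS_max_edge_returned_per_vertex = 5;        // cap edges per vertex for this case
    // Build the request and run the processor the same way the other tests do,
    // then verify at most 5 edges come back per vertex, e.g.
    // checkResponse(resp, 30, 12, 5, 7, true);
    FLAGS_max_edge_returned_per_vertex = INT_MAX;  // restore the default for later tests
}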

@@ -8,6 +8,7 @@

DEFINE_int32(max_handlers_per_req, 10, "The max handlers used to handle one request");
DEFINE_int32(min_vertices_per_bucket, 3, "The min vertices number in one bucket");
+DEFINE_int32(max_edge_returned_per_vertex, 1000, "The max edge number returnred searching vertex");
Contributor:

The default value should be bigger; I'd like it to be the maximum of int32, so that nothing gets cut off by default. Users can then configure an appropriate value in the config file, such as 1000 or 5000.

Author (Contributor):

Thanks, changed it to INT_MAX.
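
As a reference for the final shape of the change, a hedged sketch of the flag with an INT_MAX default, plus a commented, hypothetical call site showing how such a cap is typically enforced while collecting a vertex's edges; the loop and variable names are assumptions, not the PR's actual storage code.

#include <climits>
#include <gflags/gflags.h>

// INT_MAX by default, so nothing is cut off unless the operator sets a
// smaller limit (e.g. 1000 or 5000) in the config file.
DEFINE_int32(max_edge_returned_per_vertex, INT_MAX,
             "The max edge number returned when searching a vertex");

// Hypothetical call site: stop collecting edges once the cap is reached.
// int32_t edgeCount = 0;
// for (; iter->valid(); iter->next()) {
//     if (++edgeCount > FLAGS_max_edge_returned_per_vertex) {
//         break;
//     }
//     // ... append the edge to the response ...
// }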

@nebula-community-bot (Member):
Unit testing passed.

@nebula-community-bot (Member):
Unit testing passed.

@critical27 (Contributor) left a comment:

LGTM

@nebula-community-bot (Member):
Unit testing failed.

@dangleptr (Contributor):
Jenkins go

@nebula-community-bot (Member):
Unit testing passed.

@dangleptr merged commit ba0063f into vesoft-inc:master on Nov 8, 2019
whitewum pushed a commit to whitewum/nebula that referenced this pull request Nov 11, 2019
* add gflags to control max edge returned from one vertex

* rename gflags

* fix logic error, from max vertice to max edge

* while add new UT, replace an old one by mistake... fix it

* change default max edge to INT_MAX
whitewum added a commit to whitewum/nebula that referenced this pull request Nov 11, 2019
…og-doc

* 'glog-doc' of https://github.com/whitewum/nebula:
  add gflags to control max_edge_returned_per_vertex (vesoft-inc#1221)
  Fix failed SchemaTest (vesoft-inc#1242)
  Support to create hard link for current WAL (vesoft-inc#1227)
  Clear the code associated with AddHosts/RemoveHosts (vesoft-inc#1172)
  add glog chs
tong-hao pushed a commit to tong-hao/nebula that referenced this pull request Jun 1, 2021
* add gflags to control max edge returned from one vertex

* rename gflags

* fix logic error, from max vertice to max edge

* while add new UT, replace an old one by mistake... fix it

* change default max edge to INT_MAX
yixinglu pushed a commit to yixinglu/nebula that referenced this pull request Jan 31, 2023
* doc: add users and cases, optimize format

* doc: add users and cases, optimize format for Chinese README

Co-authored-by: TommyLemon <tommy.zhou@vesoft.com>
Labels: ready-for-testing (PR: ready for the CI test)
6 participants