v3.0 nebula-storaged memory increase #4686

Closed
lopn opened this issue Sep 28, 2022 · 7 comments

lopn commented Sep 28, 2022

nebula-storaged uses more and more memory as time goes on, and the only workaround is to restart it. Does anyone have a solution?

https://discuss.nebula-graph.com.cn/t/topic/10294

lopn commented Sep 28, 2022

storaged configuration:

########## basics ##########
# Whether to run as a daemon process
--daemonize=true
# The file to host the process id
--pid_file=pids/nebula-storaged.pid
# Whether to use the configuration obtained from the configuration file
--local_config=true

########## logging ##########
# The directory to host logging files
--log_dir=logs
# Log level, 0, 1, 2, 3 for INFO, WARNING, ERROR, FATAL respectively
--minloglevel=0
# Verbose log level, 1, 2, 3, 4; the higher the level, the more verbose the logging
--v=0
# Maximum seconds to buffer the log messages
--logbufsecs=0
# Whether to redirect stdout and stderr to separate output files
--redirect_stdout=true
# Destination filename of stdout and stderr, which will also reside in log_dir.
--stdout_log_file=storaged-stdout.log
--stderr_log_file=storaged-stderr.log
# Copy log messages at or above this level to stderr in addition to logfiles. The numbers of severity levels INFO, WARNING, ERROR, and FATAL are 0, 1, 2, and 3, respectively.
--stderrthreshold=2
# Whether logging files' names contain a timestamp.
--timestamp_in_logfile_name=true

########## networking ##########
# Comma separated Meta server addresses
--meta_server_addrs=127.0.0.1:9559
# Local IP used to identify the nebula-storaged process.
# Change it to an address other than loopback if the service is distributed or
# will be accessed remotely.
--local_ip=127.0.0.1
# Storage daemon listening port
--port=9779
# HTTP service ip
--ws_ip=0.0.0.0
# HTTP service port
--ws_http_port=19779
# HTTP2 service port
--ws_h2_port=19780
# heartbeat with meta service
--heartbeat_interval_secs=10

######### Raft #########
# Raft election timeout
--raft_heartbeat_interval_secs=30
# RPC timeout for raft client (ms)
--raft_rpc_timeout_ms=500
## recycle Raft WAL
--wal_ttl=14400

########## Disk ##########
# Root data path. Split by comma. e.g. --data_path=/disk1/path1/,/disk2/path2/
# One path per Rocksdb instance.
--data_path=/data/nebula_data/storage

# Minimum reserved bytes of each data path
--minimum_reserved_bytes=268435456

# The default reserved bytes for one batch operation
--rocksdb_batch_size=4096
# The default block cache size used in BlockBasedTable.
# The unit is MB.
--rocksdb_block_cache=4
# The type of storage engine, `rocksdb', `memory', etc.
--engine_type=rocksdb

# Compression algorithm, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
# For the sake of binary compatibility, the default value is snappy.
# Recommend to use:
#   * lz4 to gain more CPU performance, with the same compression ratio with snappy
#   * zstd to occupy less disk space
#   * lz4hc for the read-heavy write-light scenario
--rocksdb_compression=lz4

# Set different compressions for different levels
# For example, if --rocksdb_compression is snappy,
# "no:no:lz4:lz4::zstd" is identical to "no:no:lz4:lz4:snappy:zstd:snappy"
# In order to disable compression for level 0/1, set it to "no:no"
--rocksdb_compression_per_level=

# Whether or not to enable rocksdb's statistics, disabled by default
--enable_rocksdb_statistics=false

# Stats level used by rocksdb to collect statistics, optional values are
#   * kExceptHistogramOrTimers, disable timer stats, and skip histogram stats
#   * kExceptTimers, Skip timer stats
#   * kExceptDetailedTimers, Collect all stats except time inside mutex lock AND time spent on compression.
#   * kExceptTimeForMutex, Collect all stats except the counters requiring to get time inside the mutex lock.
#   * kAll, Collect all stats
--rocksdb_stats_level=kExceptHistogramOrTimers

# Whether or not to enable rocksdb's prefix bloom filter, enabled by default.
--enable_rocksdb_prefix_filtering=true
# Whether or not to enable rocksdb's whole key bloom filter, disabled by default.
--enable_rocksdb_whole_key_filtering=false

############## Key-Value separation ##############
# Whether or not to enable BlobDB (RocksDB key-value separation support)
--rocksdb_enable_kv_separation=false
# RocksDB key value separation threshold. Values at or above this threshold will be written to blob files during flush or compaction.
--rocksdb_kv_separation_threshold=0
# Compression algorithm for blobs, options: no,snappy,lz4,lz4hc,zlib,bzip2,zstd
--rocksdb_blob_compression=lz4
# Whether to garbage collect blobs during compaction
--rocksdb_enable_blob_garbage_collection=true

############## rocksdb Options ##############
--rocksdb_db_options={"max_open_files":"50000"}
--rocksdb_block_based_table_options={"block_size":"32768","cache_index_and_filter_blocks":"true"}


# rocksdb DBOptions in json, each name and value of option is a string, given as "option_name":"option_value" separated by comma
#--rocksdb_db_options={}
# rocksdb ColumnFamilyOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
--rocksdb_column_family_options={"write_buffer_size":"67108864","max_write_buffer_number":"4","max_bytes_for_level_base":"268435456"}
# rocksdb BlockBasedTableOptions in json, each name and value of option is string, given as "option_name":"option_value" separated by comma
#--rocksdb_block_based_table_options={"block_size":"8192"}

lopn commented Sep 28, 2022

5434 root      20   0 2763560   1.4g   3620 S   2.3  9.1 123:13.45 nebula-storaged 
5434 root      20   0 3713832   2.1g   3900 S  14.4 13.7 179:51.52 nebula-storaged 
5434 root      20   0 4082472   2.5g   4448 S   0.7 16.3 204:38.37 nebula-storaged


RES memory over time:
Sep 23: 1.4 GB
Sep 26: 2.1 GB
Sep 28: 2.5 GB

lopn commented Sep 28, 2022

Nebula version: 3.0.0
Deployment: standalone (single machine)
Production environment: yes
CPU / memory: 8 cores, 16 GB

lopn commented Sep 28, 2022

At this rate I will probably need to restart the service again in another 20-odd days.

critical27 commented Sep 28, 2022

  1. Use lsof to check how many file handles are open; since the machine only has 16 GB of memory, you can lower max_open_files (see the sketch after this list).
  2. Use jeprof to see what the allocated memory is actually being used for.
  3. Also, if you have a lot of spaces, the RocksDB memtable-related parameters all need to be tuned down.
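
A rough sketch of points 1 and 2, assuming the storaged PID is 5434 (as in the top output above) and that nebula-storaged is linked against a jemalloc built with profiling support; the MALLOC_CONF values, dump prefix, and binary path are illustrative assumptions, not something confirmed in this issue:

# 1. Count file handles held by the storaged process
lsof -p 5434 | wc -l

# 2. Enable jemalloc heap profiling in the environment that launches storaged,
#    then symbolize a dump. prof:true turns profiling on; lg_prof_interval:32
#    writes a dump roughly every 4 GB of allocation.
export MALLOC_CONF="prof:true,lg_prof_interval:32,prof_prefix:/tmp/storaged_heap"
# ... restart nebula-storaged, wait for dumps to appear, then:
jeprof --text /usr/local/nebula/bin/nebula-storaged /tmp/storaged_heap.*.heap

For point 3, a sketch of smaller memtable settings in nebula-storaged.conf; the numbers (32 MB write buffers, at most 2 per column family) are illustrative, and because memtable memory scales with the number of spaces the savings multiply accordingly:

--rocksdb_column_family_options={"write_buffer_size":"33554432","max_write_buffer_number":"2","max_bytes_for_level_base":"268435456"}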

Sophie-Xie changed the title from "nebula3.0 memory usage (nebula-storaged memory keeps growing)" to "v3.0 nebula-storaged memory increase" on Sep 28, 2022

porscheme commented Oct 4, 2022

Just curious, why did you set max_open_files?

############## rocksdb Options ##############
--rocksdb_db_options={"max_open_files":"50000"}

FYI, Nebula v3.2.0 does not even set this option.
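
For reference, lowering the cap as suggested in critical27's first point might look like the sketch below on a 16 GB host; the value 10000 is purely illustrative, not a documented recommendation, and with cache_index_and_filter_blocks already set to true the index and filter blocks are charged to the block cache instead of being kept in memory for every open file:

############## rocksdb Options ##############
# Illustrative lower cap for a 16 GB machine
--rocksdb_db_options={"max_open_files":"10000"}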

Sophie-Xie commented

If there is new information, please reopen it.
