
db_bench crashing on large number of entries #10

Closed
cmumford opened this issue Sep 9, 2014 · 4 comments
@cmumford
Contributor

cmumford commented Sep 9, 2014

Original issue 4 created by teoryn on 2011-05-10T23:17:44.000Z:

I ran a modified db_bench from --num=10 up to --num=10^9 (increasing by factors of 10) to test the scaling of leveldb.
My modifications statically link snappy into the posix version of leveldb; I've attached the diff.
Problems start occurring at --num=10^8 and --num=10^9; the attached stat.txt shows the output of all the runs.

The 10^8 run was killed during the overwrite benchmark at 23300000 ops, but I was not able to reproduce the error using 'db_bench --num=100000000 --benchmarks=overwrite'.

The 10^9 run generated the following error during fillrandom:
put error: IO error: /tmp/dbbench/006484.log: Too many open files
Running 'db_bench --num=1000000000 --benchmarks=fillrandom' reproduced the error.

Built (patched) r27 of leveldb and r35 of snappy with:
$ gcc --version
gcc (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5

@cmumford
Contributor Author

cmumford commented Sep 9, 2014

Comment #1 originally posted by ashoemaker on 2011-06-24T09:41:46.000Z:

I cannot reproduce this with leveldb/r33 and snappy/r43, though I can trigger it by lowering my file-descriptor limit. Is it possible that you had other files open when running db_bench? The default limit is 1024, and leveldb by default uses up to 1000 open files. Did you try increasing your limit (ulimit -n 10000) or decreasing options.max_open_files?

@cmumford cmumford self-assigned this Sep 9, 2014
@cmumford
Contributor Author

cmumford commented Sep 9, 2014

Comment #2 originally posted by teoryn on 2011-06-24T15:14:30.000Z:

This bug was fixed in leveldb/r28 by the merge of upstream release 1.2.

@cmumford
Contributor Author

cmumford commented Sep 9, 2014

Comment #3 originally posted by tfarina@chromium.org on 2011-06-24T15:18:56.000Z:

Per comment 2, I'm closing this. Thanks.

@cmumford
Contributor Author

cmumford commented Sep 9, 2014

Comment #4 originally posted by mbherf on 2011-09-10T05:11:32.000Z:

On other OSes the soft limit can default to 256 (e.g., Solaris). Building a version that is aware of this limit is important for portability; otherwise you have to run as root.

@cmumford cmumford closed this as completed Sep 9, 2014
isaachier pushed a commit to isaachier/leveldb that referenced this issue Dec 3, 2017
0ec2a34 Clean up compile-time warnings (gcc 7.1) (Matt Corallo)

Pull request description:

  * max_file_size was already a size_t, so return that.
  * ecx was somehow being listed as used-uninitialized

Tree-SHA512: 6aeb9d6ce343d27c00338a379e6f359a6591e06fda978204133b9f81d817d99d4e4fcb7c851cf366276ef0171cfe77fa5765d836014dd6f213653ac53420d121
maochongxin pushed a commit to maochongxin/leveldb that referenced this issue Jul 21, 2022
Ensure families is not nullptr before using it