Limited to approx 2^31 entries? #58

Closed
GoogleCodeExporter opened this issue Aug 14, 2015 · 20 comments

What steps will reproduce the problem?

Fill table with entries--table full error at approx 2^31 entries.


What version of the product are you using? On what operating system?

sparsehash-1.8.1

Mac OS X 10.6


Please provide any additional information below.

All was well with the table occupying approx 96GB RAM when I hit the table full 
limit of approx 2^31 entries.  I'm curious whether there's any plan to lift 
this nominal 32-bit limit.  (Yes, it did occur to me that I could get around 
the limit by using multiple tables.)

Regardless, kudos to the developers, it's a real champ--I paired it with 
Murmurhash.

Original issue reported on code.google.com by keidavi...@gmail.com on 20 Sep 2010 at 11:05

Oops, that was x86_64 GNU/Linux, not Mac OS X.

Original comment by keidavi...@gmail.com on 20 Sep 2010 at 11:08

What "table" are you talking about, exactly?  I'm guessing this is one of the 
sparsehash classes.  Also, what error message are you seeing, exactly?  Can you 
give a minimal program that reproduces the error?

I don't know of any inherent limitations in sparsehash of 2^31 -- we should 
just use the allocator's size_type for everything.  I believe the default 
allocator uses size_t as its size_type.  Is that 2^31 on your system, perhaps?

Original comment by csilv...@gmail.com on 20 Sep 2010 at 11:21

  • Added labels: Priority-Medium, Type-Defect

Hello Craig,

This is sparse_hash_set.

It's a vanilla system:  4x Quad-Core AMD Opteron(tm) Processor 8356, 128GB
RAM,

Linux version 2.6.18-194.3.1.el5 (mockbuild@builder10.centos.org) (gcc
version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Thu May 13 13:08:30 EDT
2010

though I'm using g++ 4.4.0 to get the -std=c++0x option.

htlimit3: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for
GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9,
not stripped

Linux ccs7int.lanl.gov 2.6.18-194.3.1.el5 #1 SMP Thu May 13 13:08:30 EDT
2010 x86_64 x86_64 x86_64 GNU/Linux




The output of the attached program is

sizeof(int) = 4
sizeof(long int) = 8
sizeof(long long int) = 8
sizeof(size_t) = 8
sizeof(void *) = 8
sizeof(size_t) = 8
size of set1 is 100000000 at 4761904.761905/sec
size of set1 is 200000000 at 5000000.000000/sec
size of set1 is 300000000 at 3333333.333333/sec
size of set1 is 400000000 at 8333333.333333/sec
size of set1 is 500000000 at 2083333.333333/sec
size of set1 is 600000000 at 9090909.090909/sec
size of set1 is 700000000 at 9090909.090909/sec
size of set1 is 800000000 at 8333333.333333/sec
size of set1 is 900000000 at 1190476.190476/sec
size of set1 is 1000000000 at 8333333.333333/sec
size of set1 is 1100000000 at 9090909.090909/sec
size of set1 is 1200000000 at 9090909.090909/sec
size of set1 is 1300000000 at 8333333.333333/sec
size of set1 is 1400000000 at 9090909.090909/sec
size of set1 is 1500000000 at 9090909.090909/sec
size of set1 is 1600000000 at 8333333.333333/sec
size of set1 is 1700000000 at 9090909.090909/sec
size of set1 is 1800000000 at 9090909.090909/sec
size of set1 is 1900000000 at 8333333.333333/sec
size of set1 is 2000000000 at 9090909.090909/sec
size of set1 is 2100000000 at 9090909.090909/sec
htlimit3: /usr/local/include/google/sparsehash/sparsehashtable.h:845:
std::pair<typename Alloc::rebind<Value>::other::size_type, typename
Alloc::rebind<Value>::other::size_type> google::sparse_hashtable<Value, Key,
HashFcn, ExtractKey, SetKey, EqualKey, Alloc>::find_position(const Key&)
const [with Value = long unsigned int, Key = long unsigned int, HashFcn =
std::hash<long unsigned int>, ExtractKey = google::sparse_hash_set<long
unsigned int, std::hash<long unsigned int>, eqcomp,
google::libc_allocator_with_realloc<long unsigned int> >::Identity, SetKey =
google::sparse_hash_set<long unsigned int, std::hash<long unsigned int>,
eqcomp, google::libc_allocator_with_realloc<long unsigned int> >::SetKey,
EqualKey = eqcomp, Alloc = google::libc_allocator_with_realloc<long unsigned
int>]: Assertion `num_probes < bucket_count() && "Hashtable is full: an
error in key_equal<> or hash<>"' failed.
Aborted

Original comment by keidavi...@gmail.com on 21 Sep 2010 at 4:38

I didn't see an attached program -- can you try attaching again?

} with Value = long unsigned int, Key = long unsigned int

OK, looks like you're using sparse_hash_map<unsigned long, unsigned long>, and 
the standard libc_allocator.  libc_allocator definitely uses size_t as 
size_type.  I don't see any code in sparsehashtable or sparsetable that uses 
int.  Very mysterious.

Unfortunately, I don't have machines with enough memory that I can easily try 
to debug this myself.  Can you augment your test program in the following ways?
   print sizeof(sparse_hash_map<unsigned long, unsigned long>::size_type);
   for each iteration, print not only set1.count(), but set1.bucket_count()

Original comment by csilv...@gmail.com on 21 Sep 2010 at 6:36
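
For reference, a minimal sketch of the kind of test program being discussed,
assuming sparse_hash_set<size_t> with std::hash and the default allocator (the
types visible in the assertion message above); the reporter's actual attachment
is not part of this thread:

  // Hypothetical reconstruction -- not the reporter's program.
  #include <cstdio>
  #include <functional>              // std::hash (with -std=c++0x)
  #include <google/sparse_hash_set>

  int main() {
    typedef google::sparse_hash_set<size_t, std::hash<size_t> > Set;
    Set set1;
    printf("sizeof(Set::size_type) = %lu\n",
           (unsigned long)sizeof(Set::size_type));
    for (size_t i = 1; i <= 8000000000ULL; ++i) {
      set1.insert(i);                // sequential 8-byte keys, as in the report
      if (set1.size() % 100000000 == 0)
        printf("set1.size() = %lu, set1.bucket_count() = %lu, "
               "set1.max_bucket_count() = %lu\n",
               (unsigned long)set1.size(), (unsigned long)set1.bucket_count(),
               (unsigned long)set1.max_bucket_count());
    }
    return 0;
  }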

Hello,

Last I tried Gmail didn't like executable attachments.

Momentarily you should receive email from transfer.lanl.gov that will
contain links to download the program modified as you describe below, and
its output.  Please let me know if you don't receive this.

Anyway, it looks like you're on to something, bucket_count() not bumping up.

Thanks.

Kei

Original comment by keidavi...@gmail.com on 21 Sep 2010 at 8:15

Something else:  you have a concept of

  set1.max_bucket_count() = 2305843009213693951

which is for 64-bit size_type, (2^64 - 1)/(byte size of the hashed data), as far
as I can tell.  Fair enough.

But then elsewhere, in bool resize_delta(size_type delta) { }, you use
numeric_limits<size_type> in a couple of places, which would seem to be
overly generous.  (But what do I know--I'm just poking around.)

    if (resize_to < needed_size &&    // may double resize_to
       resize_to < (STL_NAMESPACE::numeric_limits<size_type>::max)() / 2) {

Original comment by keidavi...@gmail.com on 21 Sep 2010 at 9:45

You're right, it would be smarter to use max_bucket_count there.  I'll look 
into it for the next release.

Original comment by csilv...@gmail.com on 21 Sep 2010 at 10:09
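
As a minimal sketch of the idea (not the actual sparsehashtable.h patch), the
"may double" guard would bound resize_to by the table's real capacity limit
rather than by numeric_limits<size_type>::max():

  #include <stdint.h>

  typedef uint64_t size_type;   // stands in for the allocator's size_type

  // Is it safe to double resize_to?  Bounding by max_bucket_count keeps the
  // doubled value addressable; the shipped code compares against
  // numeric_limits<size_type>::max()/2, which is far larger on 64-bit systems.
  bool may_double(size_type resize_to, size_type needed_size,
                  size_type max_bucket_count) {
    return resize_to < needed_size && resize_to < max_bucket_count / 2;
  }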

Same code I sent you before but with the addition of

  set1.set_resizing_parameters(0.0, 1.0);

before starting inserting.  Runs to completion of the (arbitrary) 8.0x10^9
insertions.

I tried this experiment because I thought there might be something funny
with the logic--"grow unless growing would make it a candidate for
shrinking," or "float * int" == "float * (int + 1)" when int gets big
enough, and float is close to 1.0, dunno, just wild guesses.   Also
get_resizing_parameters() is apparently not exposed to the user.

I don't have any more time to try to debug this myself, but the successful
run suggests that it's easily fixable.


$ ./htlimit3
sizeof(int) = 4
sizeof(long int) = 8
sizeof(long long int) = 8
sizeof(size_t) = 8
sizeof(void *) = 8
18446744073709551615
sizeof(sparse_hash_set<size_t, hash<size_t>, eqcomp>::size_type) = 8
set1.size() = 100000000, set1.bucket_count() = 134217728,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 200000000, set1.bucket_count() = 268435456,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 300000000, set1.bucket_count() = 536870912,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 400000000, set1.bucket_count() = 536870912,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 500000000, set1.bucket_count() = 536870912,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 600000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 700000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 800000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 900000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1000000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1100000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1200000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1300000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1400000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1500000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1600000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1700000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1800000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 1900000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2000000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2100000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2200000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2300000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2400000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2500000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2600000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2700000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2800000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 2900000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3000000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3100000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3200000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3300000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3400000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3500000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3600000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3700000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3800000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 3900000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4000000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4100000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4200000000, set1.bucket_count() = 4294967296,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4300000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4400000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4500000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4600000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4700000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4800000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 4900000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5000000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5100000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5200000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5300000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5400000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5500000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5600000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5700000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5800000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 5900000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6000000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6100000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6200000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6300000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6400000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6500000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6600000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6700000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6800000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 6900000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7000000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7100000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7200000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7300000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7400000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7500000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7600000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7700000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7800000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 7900000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951
set1.size() = 8000000000, set1.bucket_count() = 8589934592,
set1.max_bucket_count() = 2305843009213693951

Original comment by keidavi...@gmail.com on 21 Sep 2010 at 11:59

Rhetorical question:  are the initial values of shrink and grow factors
inverses, i.e. shrink*grow = 1?

Original comment by keidavi...@gmail.com on 22 Sep 2010 at 12:11

No, shrink * grow needn't = 1 (and typically won't).

Also, set_resizing_factor is deprecated -- the new API (taken from tr1) is 
max_load_factor() and min_load_factor().  You can use those to both get and set 
the two resizing parameters.

Can you paste the full output of your program's run?  You cut off at
} set1.size() = 8000000000, set1.bucket_count() = 8589934592,

I'd need to see the full output to try to debug the problem.

Original comment by csilv...@gmail.com on 22 Sep 2010 at 3:22
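
For example, the equivalent of the set_resizing_parameters(0.0, 1.0) workaround
above, written against the newer load-factor API (the values are the
workaround's, not general recommendations):

  #include <functional>
  #include <google/sparse_hash_set>

  int main() {
    google::sparse_hash_set<size_t, std::hash<size_t> > set1;
    set1.min_load_factor(0.0f);   // never shrink
    set1.max_load_factor(1.0f);   // grow only when the table is completely full
    for (size_t i = 1; i <= 1000000; ++i)
      set1.insert(i);
    return 0;
  }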

That was the whole output.  I only tried to insert 8*10^9 entries.  The
problem was happening at 1.1*10^9 entries.

Here's some more info.  I instrumented resize_delta(), see below, ran the
same program I sent you by transfer.lanl.gov, and got the following.  Note
that enlarge_threshold() blows up suddenly to nearly 2^64, so it's never
going to enlarge the table again.  I'd think it'd be straightforward to
track down now.

sizeof(int) = 4
sizeof(long int) = 8
sizeof(long long int) = 8
sizeof(size_t) = 8
sizeof(void *) = 8
(STL_NAMESPACE::numeric_limits<size_t>::max)() = 18446744073709551615
sizeof(sparse_hash_set<size_t, hash<size_t>, eqcomp>::size_type) = 8
initial min, max_load_factor = 0.320000, 0.800000
working min, max_load_factor = 0.320000, 0.800000
DUMP:  bucket_count() = 32, table.num_nonempty() = 0, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 25
RESIZING TO 64
AFTER RESIZE:  bucket_count() = 64, table.num_nonempty() = 25, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 51
RESIZING TO 128
AFTER RESIZE:  bucket_count() = 128, table.num_nonempty() = 51, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 102
RESIZING TO 256
AFTER RESIZE:  bucket_count() = 256, table.num_nonempty() = 102, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 204
RESIZING TO 512
AFTER RESIZE:  bucket_count() = 512, table.num_nonempty() = 204, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 409
RESIZING TO 1024
AFTER RESIZE:  bucket_count() = 1024, table.num_nonempty() = 409, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 819
RESIZING TO 2048
AFTER RESIZE:  bucket_count() = 2048, table.num_nonempty() = 819, delta = 1,
HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 1638
RESIZING TO 4096
AFTER RESIZE:  bucket_count() = 4096, table.num_nonempty() = 1638, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 3276
RESIZING TO 8192
AFTER RESIZE:  bucket_count() = 8192, table.num_nonempty() = 3276, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 6553
RESIZING TO 16384
AFTER RESIZE:  bucket_count() = 16384, table.num_nonempty() = 6553, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 13107
RESIZING TO 32768
AFTER RESIZE:  bucket_count() = 32768, table.num_nonempty() = 13107, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 26214
RESIZING TO 65536
AFTER RESIZE:  bucket_count() = 65536, table.num_nonempty() = 26214, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 52428
RESIZING TO 131072
AFTER RESIZE:  bucket_count() = 131072, table.num_nonempty() = 52428, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 104857
RESIZING TO 262144
AFTER RESIZE:  bucket_count() = 262144, table.num_nonempty() = 104857, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 209715
RESIZING TO 524288
AFTER RESIZE:  bucket_count() = 524288, table.num_nonempty() = 209715, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 419430
RESIZING TO 1048576
AFTER RESIZE:  bucket_count() = 1048576, table.num_nonempty() = 419430,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 838860
RESIZING TO 2097152
AFTER RESIZE:  bucket_count() = 2097152, table.num_nonempty() = 838860,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 1677721
RESIZING TO 4194304
AFTER RESIZE:  bucket_count() = 4194304, table.num_nonempty() = 1677721,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 3355443
RESIZING TO 8388608
AFTER RESIZE:  bucket_count() = 8388608, table.num_nonempty() = 3355443,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 6710886
RESIZING TO 16777216
AFTER RESIZE:  bucket_count() = 16777216, table.num_nonempty() = 6710886,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 13421773
RESIZING TO 33554432
AFTER RESIZE:  bucket_count() = 33554432, table.num_nonempty() = 13421773,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 26843546
RESIZING TO 67108864
AFTER RESIZE:  bucket_count() = 67108864, table.num_nonempty() = 26843546,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 53687092
RESIZING TO 134217728
AFTER RESIZE:  bucket_count() = 134217728, table.num_nonempty() = 53687092,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 107374184
set1.size() = 100000000, set1.bucket_count() = 134217728,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 134217728, table.num_nonempty() = 100000000, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 107374184
RESIZING TO 268435456
AFTER RESIZE:  bucket_count() = 268435456, table.num_nonempty() = 107374184,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 214748368
set1.size() = 200000000, set1.bucket_count() = 268435456,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 268435456, table.num_nonempty() = 200000000, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 214748368
RESIZING TO 536870912
AFTER RESIZE:  bucket_count() = 536870912, table.num_nonempty() = 214748368,
delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 429496736
set1.size() = 300000000, set1.bucket_count() = 536870912,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 536870912, table.num_nonempty() = 300000000, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 429496736
set1.size() = 400000000, set1.bucket_count() = 536870912,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 536870912, table.num_nonempty() = 400000000, delta =
1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 429496736
RESIZING TO 1073741824
AFTER RESIZE:  bucket_count() = 1073741824, table.num_nonempty() =
429496736, delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() =
858993472
set1.size() = 500000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 1073741824, table.num_nonempty() = 500000000, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 858993472
set1.size() = 600000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 1073741824, table.num_nonempty() = 600000000, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 858993472
set1.size() = 700000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 1073741824, table.num_nonempty() = 700000000, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 858993472
set1.size() = 800000000, set1.bucket_count() = 1073741824,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 1073741824, table.num_nonempty() = 800000000, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 858993472
RESIZING TO 2147483648
AFTER RESIZE:  bucket_count() = 2147483648, table.num_nonempty() =
858993472, delta = 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() =
14757395478869966848
set1.size() = 900000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 2147483648, table.num_nonempty() = 900000000, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 14757395478869966848
set1.size() = 1000000000, set1.bucket_count() = 2147483648,
set1.max_bucket_count() = 2305843009213693951
DUMP:  bucket_count() = 2147483648, table.num_nonempty() = 1000000000, delta
= 1, HT_MIN_BUCKETS = 4, settings.enlarge_threshold() = 14757395478869966848



  bool resize_delta(size_type delta) {
    bool dump = table.num_nonempty() % 100000000 == 0;
    bool did_resize = false;

    if (dump) {
      printf("DUMP:  bucket_count() = %lu, table.num_nonempty() = %lu, delta
= %lu, HT_MIN_BUCKETS = %lu, settings.enlarge_threshold() = %lu\
\n",
             bucket_count(), table.num_nonempty(), delta, HT_MIN_BUCKETS,
settings.enlarge_threshold());
    }
    if ( bucket_count() >= HT_MIN_BUCKETS &&
         (table.num_nonempty() + delta) <= settings.enlarge_threshold() ) {
      return did_resize;                       // we're ok as we are
    }

    // Sometimes, we need to resize just to get rid of all the
    // "deleted" buckets that are clogging up the hashtable.  So when
    // deciding whether to resize, count the deleted buckets (which
    // are currently taking up room).  But later, when we decide what
    // size to resize to, *don't* count deleted buckets, since they
    // get discarded during the resize.
    const size_type needed_size =
        settings.min_buckets(table.num_nonempty() + delta, 0);
    if ( needed_size <= bucket_count() )      // we have enough buckets
      return did_resize;

    size_type resize_to =
        settings.min_buckets(table.num_nonempty() - num_deleted + delta,
                             bucket_count());
    if (resize_to < needed_size &&    // may double resize_to
        resize_to < (STL_NAMESPACE::numeric_limits<size_type>::max)() / 2) {
      // This situation means that we have enough deleted elements,
      // that once we purge them, we won't actually have needed to
      // grow.  But we may want to grow anyway: if we just purge one
      // element, say, we'll have to grow anyway next time we
      // insert.  Might as well grow now, since we're already going
      // through the trouble of copying (in order to purge the
      // deleted elements).
      const size_type target =
          static_cast<size_type>(settings.shrink_size(resize_to*2));
      if (table.num_nonempty() - num_deleted + delta >= target) {
        // Good, we won't be below the shrink threshhold even if we double.
        resize_to *= 2;
      }
    }
    printf("RESIZING TO %lu\n", resize_to);

    sparse_hashtable tmp(MoveDontCopy, *this, resize_to);
    swap(tmp);                             // now we are tmp

    printf("AFTER RESIZE:  bucket_count() = %lu, table.num_nonempty() = %lu,
delta = %lu, HT_MIN_BUCKETS = %lu, settings.enlarge_threshold()\
 = %lu\n",
           bucket_count(), table.num_nonempty(), delta, HT_MIN_BUCKETS,
settings.enlarge_threshold());

    return true;
  }

Original comment by keidavi...@gmail.com on 22 Sep 2010 at 4:01

Bizarre.  I agree with you that the very-large enlarge_threshold would explain the 
problem you're seeing.  I can't reproduce it, though.  I wonder if this is a 
gcc bug.  I think gcc 4.4.0 had a lot of problems.

Here's one way to test this a bit more: in hashtable-common.h:enlarge_size(), 
add logging before the return that emits x, enlarge_factor_, and 
static_cast<size_type>(x * enlarge_factor_).  You may want to use cout, rather 
than printf, to do the logging, to minimize reporting bugs due to the wrong 
format code.  (Though size_type should be %lu.)

It looks to me like gcc 4.4.0 is casting the float back to a size_type 
improperly, but it's just a guess.

If you have another version of g++ handy, trying to compile your app with that, 
and seeing if it makes a difference, might also be informative.

Original comment by csilv...@gmail.com on 22 Sep 2010 at 5:35
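
A sketch of the suggested logging, assuming enlarge_size() in
hashtable-common.h has roughly this shape (verify against your copy of the
header before applying):

  // Inside the settings class in hashtable-common.h (sketch only).
  // Requires <iostream>.
  size_type enlarge_size(size_type x) const {
    const size_type result = static_cast<size_type>(x * enlarge_factor_);
    std::cout << "enlarge_size: x = " << x
              << ", enlarge_factor_ = " << enlarge_factor_
              << ", result = " << result << std::endl;
    return result;
  }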

I thought about the float conversion but it checked out okay.  Here's one
bug fixed.  I'll see how big the table gets now before something breaks.

 hashtable-common.h

 // Reset the enlarge and shrink thresholds
  void reset_thresholds(int num_buckets) {    // <-- this 'int' is the culprit
    set_enlarge_threshold(enlarge_size(num_buckets));
    set_shrink_threshold(shrink_size(num_buckets));
    // whatever caused us to reset already considered
    set_consider_shrink(false);
  }

Original comment by keidavi...@gmail.com on 22 Sep 2010 at 8:22
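
The corresponding fix, sketched: take the bucket count as size_type so values
above 2^31 are not pushed through int on their way into enlarge_size() and
shrink_size().  (The change that actually shipped in 1.9 may differ in detail.)

  // Reset the enlarge and shrink thresholds
  void reset_thresholds(size_type num_buckets) {
    set_enlarge_threshold(enlarge_size(num_buckets));
    set_shrink_threshold(shrink_size(num_buckets));
    // whatever caused us to reset already considered
    set_consider_shrink(false);
  }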

Crikey! -- I did a grep for 'int' but forgot to look at this file.  This looks 
to me to totally be the cause.  num_buckets would be negative if treated as an 
int, which would cause very large numbers when eventually cast back to a 
size_type.  I'll fix this for the next release.

Original comment by csilv...@gmail.com on 22 Sep 2010 at 8:28

  • Changed state: Started

Just over 13.7G 8-byte insertions before I crashed the machine (128GiB
RAM)--it had reached its 80% fill and was trying to double.  Note that
17179869184*8 = 128GiB.  I should have put in a limit.  Anyway, it seems to
work now.

set1.size() = 13700000000, set1.bucket_count() = 17179869184,
set1.max_bucket_count() = 2305843009213693951

Original comment by keidavi...@gmail.com on 23 Sep 2010 at 7:22

Question:

What's the interaction between max_load_factor() and the optional
constructor argument expected_max_items_in_table?  Since you can't set
max_load_factor until after the sparse_hash_set has been constructed, does
one need to take the default max_load_factor into account when specifying
expected_max_items_in_table to keep bucket_count() from being too large,
and then adjust max_load_factor afterwards?

The idea is that I want bucket_count()*([my_entry_size]) to be the size of
available memory with max_load_factor about 95%, thus avoiding all the
copying--there is no deletion.

I guess I could experiment and figure this out, but it seems like a good
thing to document.  The point of the sparse hash set, after all, is space
efficiency, i.e. being able to cram as many items into available memory as
possible.

Thanks.

Kei

Original comment by keidavi...@gmail.com on 23 Sep 2010 at 7:51

Great!  I'm hoping to put out a new release shortly.

Original comment by csilv...@gmail.com on 23 Sep 2010 at 8:57

We don't expect most people to adjust max_load_factor() -- especially for
sparse_hash_*, where extra buckets cost very little (just a few bits each).  In
fact, it's more common to see people overprovision, with a very small
max_load_factor() like .2, to keep speed up.

So you're right: if you want to size the hashtable based on the number of buckets,
rather than the number of items, you'll need to take max_load_factor into account
(though note, again, that the amount of memory used will be dominated by the
number of items in the hashtable).

Anyway, resizing an empty hashtable is cheap, so there should be no problem 
creating a hashtable (with default size), adjusting max_load_factor, and then 
calling resize(a big number).

Original comment by csilv...@gmail.com on 23 Sep 2010 at 9:02
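
Following that advice, one way to set it up (target_items is a made-up number
here; derive it from available memory divided by the per-entry size):

  #include <functional>
  #include <google/sparse_hash_set>

  int main() {
    google::sparse_hash_set<size_t, std::hash<size_t> > set1;   // default size
    set1.max_load_factor(0.95f);            // ~95% fill before growing
    const size_t target_items = 10000000;   // hypothetical target
    set1.resize(target_items);              // cheap while the table is empty
    for (size_t i = 1; i <= target_items; ++i)
      set1.insert(i);
    return 0;
  }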

Thanks.

Kei

Original comment by keidavi...@gmail.com on 23 Sep 2010 at 9:40

This should be fixed in sparsehash 1.9, just released.

Original comment by csilv...@gmail.com on 24 Sep 2010 at 6:52

  • Changed state: Fixed
