I've noticed a strange memory leak when indexing texts from Wikipedia. Even when the in-memory cache size is small, the Python client process that creates the records occupies more and more memory. Here is a minimal example that reproduces the problem and shows that everything nevertheless looks fine to a memory profiler:
import zerodb
import random
import transaction
from zerodb.models import Model, Text
from pympler import tracker

# materialize the alphabet as a list so random.choice works on Python 3 too
all_chars = [chr(c) for c in range(ord('a'), ord('z') + 1)]
get_word = lambda: "".join(random.choice(all_chars) for i in range(1, 8))
words = [get_word() for i in range(10000)]
get_text = lambda: " ".join(random.choice(words) for i in range(50))

memory_tracker = tracker.SummaryTracker()

class Doc(Model):
    text = Text()

if __name__ == "__main__":
    username = "root"
    passphrase = "very insecure passphrase - never use it"
    db = zerodb.DB(("localhost", 8001), username=username, password=passphrase)
    for i in range(10000):
        print(i)
        memory_tracker.print_diff()  # reports no growth, yet RSS keeps climbing
        with transaction.manager:
            # commit 1000 documents per transaction
            for j in range(1000):
                doc = Doc(text=get_text())
                db.add(doc)
    db.disconnect()
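For reference, the growth check the script relies on can also be done with the stdlib alone. The sketch below (no zerodb connection assumed; the retained list is a stand-in for the leaked records) counts live gc-tracked objects by type before and after an allocation burst; types whose counts keep rising across iterations are leak candidates.

```python
import gc
from collections import Counter

def live_types():
    # count gc-tracked (container) objects by type name
    return Counter(type(o).__name__ for o in gc.get_objects())

before = live_types()
retained = [[i] for i in range(5000)]  # stand-in for records that are never freed
gc.collect()
growth = live_types() - before
print(growth.most_common(3))  # the 5000 retained lists dominate the diff
```

This is essentially what pympler's SummaryTracker does per call to print_diff(); the point of the bug report is that neither view shows the growth that the OS reports.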
Possibly something like this. Solving the problem probably requires taking memory dumps with gdb and examining what kind of data makes up the leftovers in memory.
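Before reaching for gdb core dumps, a pure-Python pass with the stdlib tracemalloc module may already show which allocation sites retain memory: snapshot the heap before and after the suspect loop and diff the snapshots (a sketch; the retained list stands in for the leaking records).

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# stand-in for the loop that leaks: allocations that stay reachable
retained = ["x" * 100 for _ in range(10000)]

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")  # sorted by size growth, descending
for stat in stats[:3]:
    print(stat)  # top allocation sites by retained bytes
```

If tracemalloc also shows nothing, that points at memory held outside the Python allocator (C extensions, fragmentation), which is where a gdb dump becomes necessary.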