
Empty lists while creating parents #15367

Closed
roed314 opened this issue Nov 7, 2013 · 74 comments
@roed314
Contributor

roed314 commented Nov 7, 2013

This is crazy:

sage: import gc
sage: pre = gc.get_objects()
sage: K = Qp(7,2)
sage: post = gc.get_objects()
sage: len(post) - len(pre)
747

Much of that is empty lists:

sage: M = []
sage: for a in post:
....:     for b in pre:
....:         if a is b:
....:             break
....:     else:
....:         M.append(a)
sage: from collections import defaultdict
sage: types = defaultdict(int)
sage: for a in M:
....:     types[type(a)] += 1
sage: print '\n'.join([str(a) + ' ' + str(types[a]) for a in types.keys() if types[a] > 1])
<type 'list'> 554
<type 'tuple'> 57
<class 'weakref.KeyedRef'> 28
<type 'weakref'> 22
<type 'dict'> 18
<type 'sage.structure.coerce_dict.MonoDictEraser'> 10
<type 'sage.structure.coerce_dict.MonoDict'> 10
<type 'cell'> 8
<type 'instancemethod'> 6
<type 'frame'> 6
<type 'sage.structure.coerce_dict.TripleDict'> 5
<type 'sage.structure.coerce_dict.TripleDictEraser'> 5
<type 'function'> 4
<type 'staticmethod'> 4
<class 'sage.structure.dynamic_class.DynamicMetaclass'> 4
<class 'sage.rings.homset.RingHomset_generic_with_category'> 2
<class '_ast.Name'> 2

sage: listlens = defaultdict(int)
sage: for a in M:
....:     if isinstance(a, list):
....:         listlens[len(a)] += 1
sage: sorted(listlens.items())
[(0, 525), (1, 9), (2, 2), (3, 2), (23, 10), (53, 5), (116583, 1)]

I'm not sure where these 525 empty lists are created, but it seems a bit excessive.

It's not just p-adic fields:

sage: pre = gc.get_objects()
sage: K = QuadraticField(-11)
sage: post = gc.get_objects()
sage: len(post) - len(pre)
1152

CC: @nbruin @simon-king-jena @vbraun

Component: coercion

Author: Nils Bruin

Branch/Commit: u/nbruin/ticket/15367 @ 719cdec

Reviewer: Simon King

Issue created by migration from https://trac.sagemath.org/ticket/15367

@roed314 roed314 added this to the sage-6.1 milestone Nov 7, 2013
@nbruin
Contributor

nbruin commented Nov 7, 2013

comment:1

These are the empty buckets of the various TripleDict and MonoDict data structures:

sage: import gc
sage: pre = gc.get_objects()
sage: K = Qp(7,2)
sage: post = gc.get_objects()
sage: len(post) - len(pre)
682
sage: pre_id=set( id(p) for p in pre )
sage: new=[p for p in post if id(p) not in pre_id and isinstance(p,list) and len(p) ==0]

OK, we now have all the new lists of length 0. Let's see who's referring to them:

sage: refs=[]
sage: for p in new: refs.extend(gc.get_referrers(p))

and count, among the referrers that are lists themselves, what the lengths are.

sage: from collections import Counter
sage: Counter(len(l) for l in refs if isinstance(l,list))
Counter({525: 525, 117431: 525, 53: 265, 23: 228})

The first one must be new itself and the one of length 117431 also seems to contain all 525 of them, so that must be post. The other ones have conspicuous lengths: 53 and 23:

sage: search_src("TripleDict\(23\)")
structure/parent.pyx:684:            self._action_hash = TripleDict(23)
structure/parent_old.pyx:87:        self._action_hash = TripleDict(23)
sage: search_src("MonoDict\(23\)")
structure/parent_gens.pyx:254:        self._has_coerce_map_from = MonoDict(23)
structure/parent.pyx:682:            self._coerce_from_hash = MonoDict(23)
structure/parent_old.pyx:85:        self._coerce_from_hash = MonoDict(23)
structure/parent_old.pyx:100:        self._has_coerce_map_from = MonoDict(23)
structure/parent_old.pyx:343:            self._has_coerce_map_from = MonoDict(23)
sage: search_src("MonoDict\(53\)")
structure/parent.pyx:686:            self._convert_from_hash = MonoDict(53)

so it looks like we have 265/53=5 (empty) self._convert_from_hash dictionaries and 228/23=9.9, so probably 10 (mostly) empty self._coerce_from_hash and self._action_hash dictionaries. Did we create 5 parents, perhaps?

If you think this is a real issue, we could open-code the hash tables for MonoDict and TripleDict in the same way that Python's dict is implemented. That's quite a bit of work to program correctly, though! In the meantime, we could reduce the default sizes of the dictionaries a bit. Perhaps 53 and 23 are a bit on the large side for dictionaries that tend not to fill up much?
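
For reference, the old layout in rough Python form (a hedged sketch only; the real implementation is Cython and guards the id-based keys with weak references): every bucket is a separate Python list, allocated up front, so an empty MonoDict(23) already owns 23 empty lists.

# Hedged sketch of the separate-chaining layout described above.
# Names are illustrative; keys are compared by identity, as in MonoDict.
class ChainedMonoDict(object):
    def __init__(self, size=23):
        # One Python list per bucket, allocated eagerly: an *empty*
        # dict of size 23 already owns 23 empty lists.
        self.buckets = [[] for _ in range(size)]

    def set(self, key, value):
        bucket = self.buckets[id(key) % len(self.buckets)]
        for entry in bucket:
            if entry[0] == id(key):
                entry[1] = value
                return
        bucket.append([id(key), value])

    def get(self, key):
        bucket = self.buckets[id(key) % len(self.buckets)]
        for key_id, value in bucket:
            if key_id == id(key):
                return value
        raise KeyError(key)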

@sagetrac-afiori
Mannequin

sagetrac-afiori mannequin commented Nov 7, 2013

comment:2

The code:

import gc

memus = get_memory_usage()
for i in primes_first_n(1000):
    K = Qp(i,2)
    y = 2 - K(2)
    del K, y
    if i % 100 == 1:
        gc.collect()
gc.collect()
print get_memory_usage(memus)

allocates about 64MB of RAM. This suggests that we are using about 64KB of RAM per p-adic field. There is a similar phenomenon for quadratic fields (about 50KB each).
For normal use, nobody is likely to care about one 64KB allocation; however, if you want to be using, say, 20000 p-adic fields (or perhaps 20000 things with parents), you might care about your 1.2GB of RAM.

The side issue that it is very easy to make p-adic fields or number fields un-garbage collectable (hopefully fixed forever someday soon #14711) makes it easy for a user to accidentally end up running out of ram.

@sagetrac-afiori
Mannequin

sagetrac-afiori mannequin commented Nov 7, 2013

comment:3

There is a comment in:
parent.pyx:2196

if self._coerce_from_hash is None: # this is because parent.__init__() does not always get called
    self.init_coerce(False)

that suggests that we can't rely on __init__ actually being responsible for allocating anything. Consequently, throughout parent.pyx and parent_old.pyx, most places that use any of
_action_hash, _has_coerce_map_from, _coerce_from_hash, _convert_from_hash
first check whether they have been allocated yet. (There may be places where we don't check... which I suppose would be a bug.)

This suggests the following fix:
Never allocate any of these unless we intend to put something in them. We already need to check whether they are unallocated; if they are, we can safely assume they are empty and not create them until we are about to put something in them. We can also make their eventual starting size smaller.

This may even speed up execution if these lists are typically empty, as looking to see if an empty list contains something will be slower than checking that the list doesn't even exist yet (especially as we already check if the list doesn't exist yet).
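
In code, the proposal amounts to something like this (a hedged sketch; the attribute name mirrors parent.pyx, but the class and method names are illustrative):

# Hedged sketch: allocate the coercion cache only on first write.
class LazyCacheParent(object):
    def __init__(self):
        self._coerce_from_hash = None      # nothing allocated yet

    def register_coercion(self, domain, morphism):
        if self._coerce_from_hash is None:
            self._coerce_from_hash = {}    # created only when needed
        self._coerce_from_hash[domain] = morphism

    def coerce_map_from(self, domain):
        if self._coerce_from_hash is None:
            return None                    # unallocated means empty
        return self._coerce_from_hash.get(domain)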

@nbruin
Contributor

nbruin commented Nov 7, 2013

comment:4

Memory footprint:

sage: from sage.structure.coerce_dict import MonoDict
sage: memus= get_memory_usage();
sage: L=[MonoDict(23) for i in xrange(2*10^5)]
sage: print get_memory_usage(memus);
430.5078125
sage: memus= get_memory_usage();
sage: L2=[MonoDict(53) for i in xrange(2*10^5)]
sage: print get_memory_usage(memus);
908.0078125
sage: memus= get_memory_usage();
sage: L3=[sage.structure.coerce_dict.TripleDict(23) for i in xrange(2*10^5)]
sage: print get_memory_usage(memus);
430.5546875
sage: L4=[sage.structure.coerce_dict.TripleDict(11) for i in xrange(2*10^5)]
sage: print get_memory_usage(memus);
248.15234375

So we see that, assuming 1MB = 10^6B, the memory footprint of an empty TripleDict or MonoDict is about 112, 93, and 86 bytes per initialized bucket for starting sizes 11, 23, and 53 respectively (empty MonoDicts and TripleDicts should have about the same footprint; this would not be true for an open-coded hash table). So scaling back all these dictionaries to starting size 11 is probably quite reasonable: that allows 6 entries in them before resizing would be triggered.
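
Spelling out the per-bucket arithmetic (a quick sanity check, taking the measured deltas above at face value and 1MB = 10^6B):

# Bytes per initialized bucket, from the 2*10^5-dict measurements above.
for size, mb in [(11, 248.15234375), (23, 430.5078125), (53, 908.0078125)]:
    print("size %2d: %5.1f bytes/bucket" % (size, mb * 10**6 / (2 * 10**5 * size)))
# size 11: 112.8 / size 23: 93.6 / size 53: 85.7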

It also makes a difference in initialization time:

sage: from sage.structure.coerce_dict import MonoDict, TripleDict
sage: %time _=[sage.structure.coerce_dict.TripleDict(11) for i in xrange(2*10^5)]
CPU times: user 1.30 s, sys: 0.07 s, total: 1.37 s
Wall time: 1.37 s
sage: %time _=[sage.structure.coerce_dict.TripleDict(23) for i in xrange(2*10^5)]
CPU times: user 2.14 s, sys: 0.11 s, total: 2.25 s
Wall time: 2.26 s
sage: %time _=[sage.structure.coerce_dict.TripleDict(53) for i in xrange(2*10^5)]
CPU times: user 3.33 s, sys: 0.16 s, total: 3.49 s
Wall time: 3.50 s

As you can see, allocating the empty lists is a pretty significant cost as well. For an open-coded hash table we would expect, on a 64-bit machine, 16 bytes per entry in MonoDict and 32 bytes per entry in TripleDict (compared to 24 bytes for a Python dict).

Furthermore, allocation time would be fairly independent of size: the store would simply be a slab of memory that can be initialized to null by a straight memory-zeroing call.

So I'm pretty sure we could make MonoDict and TripleDict quite a bit faster and more memory-efficient by basically just copying the open hash-table design of Python's dict. We'd be working with table sizes that are a power of two, so we'd probably want to shift the addresses that we use as keys (given that the bottom 3 bits probably carry no information).
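
Concretely, the address-shifting idea looks something like this (a sketch only; the eventual code may differ):

# Hedged sketch: index into a power-of-two table by memory address.
# Objects are at least 8-byte aligned, so the bottom 3 bits of id()
# carry (almost) no information; shift them away before masking.
def bucket_index(key, mask):
    return (id(key) >> 3) & mask    # mask = table_size - 1, size a power of 2

table_size = 16                     # must be a power of two
print(bucket_index(object(), table_size - 1))   # some index in range(16)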

It's quite a bit of work to get right, so we should probably first establish that this is actually a bottleneck somewhere. I don't think the runtime characteristics of the implementations would be hugely different (but I cannot exclude something like a factor of 2 in runtime)

@simon-king-jena
Member

comment:5

Replying to @nbruin:

So I'm pretty sure we could make MonoDict and TripleDict quite a bit faster and more memory-efficient by basically just copying the open hash-table design for python's dict. We'd be working with table sizes that are a power of two, so we'd probably want to shift the addresses that we use as keys (given that the bottom 3 bits probably carry no information)

It's quite a bit of work to get right, ...

Indeed. Thus, I think it would be a good idea to start with small steps, that can easily be done and may improve the situation.

Namely:

  • Let the default size of coercion caches be smaller. Why don't we test how big the caches become during doctests? If it turns out that most of the time we only store about 20 coercions, it does not make sense to create 53 buckets; 23 would be more than enough.
  • Only create the dictionaries when we actually need them. I should add: Do not initialise all coercion caches of a parent at once.

@nbruin
Contributor

nbruin commented Nov 7, 2013

comment:6

Replying to @simon-king-jena:

  • Let the default size of coercion caches be smaller. Why don't we test how big the caches become during doctests? If it turns out that most of the time we only store about 20 coercions, it does not make sense to create 53 buckets; 23 would be more than enough.

Given that we should bound the fill ratio by about 60%-70%, that would not be enough. I expect, however, that many caches have very few entries, so going with 11 and reshaping when required is likely the best.

  • Only create the dictionaries when we actually need them. I should add: Do not initialise all coercion caches of a parent at once.

Hm, I wonder how the cost of that works out. Another option: make MonoDict etc. lazy in actually allocating the buckets (i.e., upon creation, just set self._buckets=None).

@simon-king-jena
Member

comment:7

To test whether the dimension of the dicts is too big at initialisation, it makes sense to see how often a resize of MonoDict and TripleDict happens.

First findings:

  • During startup of Sage, there is no resize of a MonoDict. There is exactly one TripleDict that is resized, and it is actually resized three times.
  • The number of items in a MonoDict or TripleDict is always (by automatic resize) smaller than the number of buckets. The first resize of the TripleDict mentioned above happens when it has 53 buckets but 38 items (and after the resize, it thus has 107 buckets for 38 items!)

I will now see what happens during doctests.

Question: Is it really a good idea to have 107 buckets for 38 items?

@simon-king-jena
Member

comment:8

Replying to @nbruin:

Given that we should bound the fill ratio by about 60%-70%, that would not be enough.

What is the theory behind this choice?

  • Only create the dictionaries when we actually need them. I should add: Do not initialise all coercion caches of a parent at once.

Hm, I wonder how the cost of that works out. Another option: make MonoDict etc. lazy in actually allocating the buckets (i.e., upon creation, just set self._buckets=None).

Good idea! So, do not be lazy in coercion initialisation of Parent, but be lazy in the underlying data structure.

@nbruin
Contributor

nbruin commented Nov 7, 2013

comment:9

Replying to @simon-king-jena:

Question: Is it really a good idea to have 107 buckets for 38 items?

The answer to this question is basically the same as "Is it really worth using a dictionary with lightning fast lookup and bad memory density rather than a linear search in a dense list?", except that we get to tune between the two.

Given cache locality, a linear search in a list of 38 items could be quite competitive!

Python dicts are even more aggressive in resizing: for small dictionary sizes, they aim for the next power of two that is at least FOUR times the number of items (and trigger a resize at a 2/3 occupation rate); see the sketch below. But then, their price per entry is 24 bytes.
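
The growth rule just described, as a sketch (the constants follow the description in this comment; the minimum table size of 8 matches CPython's PyDict_MINSIZE):

# Hedged sketch of CPython 2.x's small-dict target size, as described above.
def cpython2_dict_target(used):
    n = 8                       # PyDict_MINSIZE
    while n < 4 * used:         # next power of two >= 4 * #items
        n <<= 1
    return n

print(cpython2_dict_target(38))   # 256 buckets for 38 items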

@simon-king-jena
Member

comment:10

Next findings:

  • MonoDict does not get resized often, and during doctests of sage.schemes, the resize happens at a maximal size of 1118 items
  • TripleDict is resized more often. A size of 1277 is not uncommon, and maximally I see a resize happening with 5125 items.

Anyway, I am not sure if these figures are hinting at something.

Do I understand correctly: your suggestion is to start with a list (an actual initialised list) of buckets, but initially each bucket would be None rather than []. Hence, if the hash makes us choose a bucket that is None, then we immediately know that an item with this key hash does not exist yet, and we may either create a new bucket (if we want to add a new item) or immediately raise a KeyError (if we want to get an item).
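
That reading, as a sketch (duplicate-key handling and the weakref machinery omitted; the real keys are compared by identity):

# Hedged sketch: a bucket list initialised with None instead of [].
class NoneBucketDict(object):
    def __init__(self, size=11):
        self.buckets = [None] * size       # no bucket lists allocated yet

    def get(self, key):
        bucket = self.buckets[id(key) % len(self.buckets)]
        if bucket is None:
            raise KeyError(key)            # bucket never used: key absent
        for key_id, value in bucket:
            if key_id == id(key):
                return value
        raise KeyError(key)

    def set(self, key, value):
        i = id(key) % len(self.buckets)
        if self.buckets[i] is None:
            self.buckets[i] = []           # create the bucket on first write
        self.buckets[i].append((id(key), value))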

@nbruin
Contributor

nbruin commented Nov 7, 2013

comment:11

Replying to @simon-king-jena:

Do I understand correctly: Your suggestion is to start with a list (an actual initialised list) of buckets, but initially each bucket will be None rather than [].

Uh, no. I was more thinking of starting out with a bucket list of None. Your interpretation is also possible, but addresses a slightly different issue. It could be worth doing both, one of the two, or neither.

For your proposal: I don't know what the cost difference is between making an empty list and putting another reference to None in, i.e., the cost of

   cdef list buckets = PyList_New(nbuckets)
   for i in range(nbuckets):
        Py_INCREF(None)  # PyList_SetItem steals a reference, so incref None first
        PyList_SetItem(buckets, i, None)

versus

   cdef list buckets = PyList_New(nbuckets)
   for i in range(nbuckets):
        PyList_SetItem(buckets, i, PyList_New(0))

If the difference is significant (and it may well be, because increffing doesn't involve memory allocation) then your interpretation may well have great benefits (and surely reduced memory footprint).

If you're feeling gutsy, you could just leave the entries in buckets initialized with NULL and save on all the incref/decref activity on None.

Come to think of it, I expect that your interpretation would have the biggest impact.

@nbruin
Contributor

nbruin commented Nov 8, 2013

comment:12

Replying to @simon-king-jena:

Replying to @nbruin:

Given that we should bound the fill ratio by about 60%-70%, that would not be enough.

What is the theory behind this choice?

A reasonable start is Wikipedia. We're currently using "separate chaining", which degrades fairly gracefully with increasing load factor, so perhaps the 60-70% load factor isn't as strict as the phase transition that open addressing undergoes around that point. You could experiment and tune a reasonable trade-off (the idea is that any time you don't hit what you're looking for on the first try, you've basically lost; hash conflict resolution should rarely be necessary).

The other design limitation is that reshaping is expensive (O(size of dict)), so it should happen rarely. That means that if you increase the number of bins, you have to do so by a lot; see the sketch below.
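
As a sketch, the two constraints combine into a policy like this (the threshold and growth factor shown are tuning choices for illustration, not the values the code ends up using):

# Hedged sketch: resize rarely, but grow a lot when you do.
def maybe_resize(used, nbuckets, threshold=0.7, growth=4):
    # Growing by 4x keeps the (O(n)-expensive) resizes rare while
    # keeping the typical load factor well below the threshold.
    if used > threshold * nbuckets:
        return growth * nbuckets    # new bucket count
    return None                     # no resize needed

print(maybe_resize(38, 53))   # the 38-items-in-53-buckets case from comment:7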

@nbruin
Contributor

nbruin commented Nov 10, 2013

comment:13

After figuring out how python dicts work to improve WeakValueDictionary it wasn't so complicated to replicate the design for MonoDict and TripleDict either. Things of note:

  • I tried to be drop-in compatible. I haven't run doctests yet, but adapting should be straightforward
  • Performance is noticeably better than the previous implementation. Given how much overhead a per-line timeit has due to the Python interpreter, I expect that this means the performance is considerably better (but in applications it probably doesn't matter that much, since our previous implementation was already pretty good)
  • regular deletions are done by placing the items to be deleted in a list (yes, that means allocating a list, but weakref callbacks etc. are allowed to allocate memory; it would be virtually impossible to write Python interpreter code that doesn't) and then throwing away the list. Advantage: we borrow Python's trashcan, which is already implemented for container types like lists. Of course there's the slight disadvantage of having some memory allocation activity, but that should be pretty fast (every Python function call involves constructing tuples etc.)
  • storage should be much more compact, which should make the code more cache-friendly as well. We don't have per-bucket overhead anymore: open addressing shares all space.
  • design sizes are irrelevant: the table size is just a power of 2, but we use different hashing, so we should still get a good spread (see the lookup sketch at the end of this comment).

Some little problems:

  • rolling your own dynamically allocated store for Python object links means that Cython's generated GC traverse and clear code isn't sufficient. I wrote my own and hacked them into the right slots.
  • we need to do some extensive testing to check no silly mistakes are left.
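
For the record, the probe sequence being copied from CPython's dict looks roughly like this (a simplified sketch in plain Python; the real code is Cython, compares keys by address, and handles weakrefs):

# Hedged sketch of open-addressing lookup, CPython-dict style.
DUMMY = object()    # marks deleted slots so probe chains stay intact

def lookup(table, key_id):
    # Returns a slot index: either where key_id lives, or where it
    # could be inserted.  The caller inspects table[slot] to tell.
    mask = len(table) - 1           # table length is a power of two
    perturb = key_id
    i = key_id & mask
    freeslot = None
    while True:
        entry = table[i]
        if entry is None:
            return i if freeslot is None else freeslot
        if entry is DUMMY:
            if freeslot is None:
                freeslot = i        # first reusable slot on this chain
        elif entry[0] == key_id:
            return i                # hit
        # miss: mix in more hash bits and step to the next slot
        i = (5 * i + perturb + 1) & mask
        perturb >>= 5

table = [None] * 8
slot = lookup(table, 12345)
table[slot] = (12345, "some value")
print(lookup(table, 12345) == slot)   # True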

@nbruin
Contributor

nbruin commented Nov 11, 2013

Attachment: coerce_dict.pyx.gz

MonoDict and TripleDict implemented by open addressing

@nbruin
Contributor

nbruin commented Nov 11, 2013

MonoDict and TripleDict implemented by open addressing

@nbruin
Contributor

nbruin commented Nov 11, 2013

comment:14

Attachment: coerce_dict.pxd.gz

With the attached files the doctests succeed (modulo some trivial failures).

@simon-king-jena
Member

comment:15

With the old code, it would have been relatively easy to create a dictionary with a fixed key size (e.g. a dictionary such that all keys are 5-tuples of objects that are weak-refd if possible and compared by identity).

One question we might address: how difficult would this be with the new code you are suggesting? You are defining cdef struct mono_cell and cdef struct triple_cell. One might think of this definition:

cdef struct tuple_cell:
    void** key_id_list
    PyObject** key_weakref_list
    PyObject* value

and then a KeytupleDict (or whatever we call it) would allocate enough memory for n pointers (n being the key size of the dictionary), both for key_id_list and key_weakref_list, for each item it stores. At first glance, your code does not look difficult to generalise.

So, do you think we should have MonoDict and TripleDict for hard-coded key length and then (on a different ticket?) KeytupleDict for arbitrary fixed key length?

@nbruin
Contributor

nbruin commented Nov 11, 2013

comment:16

Replying to @simon-king-jena:

So, do you think we should have MonoDict and TripleDict for hard-coded key length and then (on a different ticket?) KeytupleDict for arbitrary fixed key length?

I do not think so. With the previous code you had a prototype that performed well and offered this feature and we didn't put it in. That suggests to me we don't have a need for it.

In any case, if you did this, I don't think you'd want to add an extra layer of indirection in the key fields. Hence, you'd have a dynamic-length structure:

cdef struct tuple_cell:
    void* key_ids[length]
    PyObject* key_weakrefs[length]
    PyObject* value

which of course you cannot define literally, but you only need its size, which should fairly safely be sizeofcell = (2*length+1)*sizeof(void*). Then you can have your table allocated as

table=<void**>PyMem_Malloc(newsize * sizeofcell)

and entry i would be addressed as

cursor=<void**>&(table[(i&mask)*(2*length+1)])
key_id1=cursor[0]
key_id2=cursor[1]
key_wr1=<PyObject*> cursor[length]
key_wr2=<PyObject*> cursor[length+1]
value  =<PyObject*> cursor[2*length]

One problem with open addressing is that it has no cache locality at all. That's not a huge problem if the table is compact (and your hits are almost always immediate). Since the weakrefs and value are only used for validation, not searching, you could consider storing 'those' under an indirection.

I think we should only generalize the design if we know we have a need for it. I think first we should establish if this implementation is really an advantage, if the hash functions perform well, and if the resizing amount is reasonable.

One way would be to let lookup keep statistics:

  • number of iterations needed
  • mask, fill, used statistics

and perhaps also for the dictionaries themselves:

  • calls to resize
  • calls to iteritems (especially with what mask:used:fill ratio that gets called)

See: http://code.activestate.com/recipes/578375/ for a proposal that makes a much more compact search table with compact iteration.

And likewise: profiling data on how much time one typically spends in this code. If that's 2%, even doubling the speed would not have much effect. We may be hitting this code fairly often, but it's still pretty specialized: not like a dict, of which several get queried for every method lookup.
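
A minimal sketch of the statistics idea above (illustrative only; real counters would live inside the Cython lookup itself):

# Hedged sketch: histogram the number of probes each lookup needed.
from collections import Counter

probe_histogram = Counter()    # probes-per-lookup -> frequency

def record_lookup(nprobes):
    probe_histogram[nprobes] += 1

record_lookup(1); record_lookup(1); record_lookup(3)
print(probe_histogram)         # Counter({1: 2, 3: 1}): mostly first-try hits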

@simon-king-jena
Member

comment:17

I can confirm that all doctests pass, except some in sage.structure.coerce, which explicitly use a method that shows the distribution of values in the buckets and is thus not relevant.

Suggestion: I turn your two files into a git branch, remove the useless stuff in coerce.pyx, and then we can start reviewing (and testing performance).

@simon-king-jena
Member

Branch: u/SimonKing/ticket/15367

@simon-king-jena
Member

comment:19

Replying to @simon-king-jena:

Suggestion: I turn your two files into a git branch, remove the useless stuff in coerce.pyx, and then we can start reviewing (and testing performance).

Well, I simply did...

The first commit is the result of your drop-in replacement, the second commit is removing the get_stats() method, which is a debug method that is not functional any longer. I think the second commit is nothing but a review patch. Filling Author and Reviewer fields accordingly---but the review is not completed yet!


New commits:

[a2852e9](https://github.com/sagemath/sagetrac-mirror/commit/a2852e9) Remove unused debug method CoercionModel_cache_maps.get_stats()
[1724e0e](https://github.com/sagemath/sagetrac-mirror/commit/1724e0e) An improved implementation of MonoDict and TripleDict

@simon-king-jena
Member

Reviewer: Simon King

@simon-king-jena
Member

Commit: a2852e9

@simon-king-jena
Member

Author: Nils Bruin

@simon-king-jena
Member

comment:20

With the attached branch, I repeated the example from the ticket description:

sage: import gc
sage: pre = gc.get_objects()
sage: K = Qp(7,2)
sage: post = gc.get_objects()
sage: len(post) - len(pre)
211

And from comment:2 (starting a new session):

sage: M = []
sage: memus= get_memory_usage()
sage: for i in primes_first_n(1000):
....:     K = Qp(i,2)
....:     y=2-K(2)
....:     del K,y
....:     if i%100 == 1:
....:         gc.collect()
....: gc.collect()
....: print get_memory_usage(memus)
....: 
30
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
12.48828125

With sage-5.12.beta5+#13394, I get only "0" for gc.collect(), and in the end get_memory_usage(memus) returns 30.2265625.

I have to leave now, but will tell something about memory footprint later.

@simon-king-jena
Member

Work Issues: Improve timings

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Nov 18, 2013

Branch pushed to git repo; I updated commit sha1. New commits:

[9aeb4f5](https://github.com/sagemath/sagetrac-mirror/commit/9aeb4f5)trac #15367: fix whitespace

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Nov 18, 2013

Changed commit from b8d5a5a to 9aeb4f5

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Nov 19, 2013

Changed commit from 9aeb4f5 to 719cdec

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Nov 19, 2013

Branch pushed to git repo; I updated commit sha1. This was a forced push. Recent commits:

719cdec Remove unused debug method CoercionModel_cache_maps.get_stats()
0550ecb trac #15367: Improved implementation of TripleDict and MonoDict, based on open addressing. This implementation should be faster for all operations and have a smaller memory footprint.

@nbruin
Contributor

nbruin commented Dec 9, 2013

comment:44

ping

I think this straightforward replacement is worth merging:

  • memory layout is more straightforward: If you want to trace a memory leak/reference chain, you immediately get to the MonoDict/TripleDict rather than having to dig through 2 levels of lists
  • memory footprint is much more compact
  • performance should be quite a bit better (although it seems that these dicts rarely get accessed in tight loops: optimizing such a loop usually removes the access to these dicts)
  • the way items get deleted makes sure that the Python trashcan is involved, which prevents recursion depth errors from occurring when deleting long chains of objects.

So, if someone can finish the review ... It's pretty straightforward code (apart from being rather performance sensitive).

@simon-king-jena
Member

comment:45

Once more, my impression is that the git workflow helps us solve problems that we didn't have without git.

That's to say: sage --dev checkout --ticket=15367 only brought me to my local branch for this ticket. sage --dev pull resulted in

Merging the remote branch "u/nbruin/ticket/15367" into the local branch "ticket/15367".
Automatic merge failed, there are conflicting commits.

Auto-merging src/sage/structure/coerce_dict.pyx
CONFLICT (content): Merge conflict in src/sage/structure/coerce_dict.pyx

Please edit the affected files to resolve the conflicts. When you are finished, your resolution will be commited.
Finished? [ok/Abort] A

So, how can I get your code?

@vbraun
Member

vbraun commented Dec 9, 2013

comment:46

Nils rewrote history with the forced pushes (bad), so some of the code in Simon's branch conflicts, as it is on a different history.

Replying to @simon-king-jena:

Once more, my impression is that the git workflow helps us solve problems that we didn't have without git

Noticing conflicts is not a problem when you manually copy patches, correct. However, not noticing conflicts is a problem ;-)

So, how can I get your code?

If you haven't made any commits that you want to keep: delete your current branch (git branch -D my_branch) and check out a fresh copy. If you have made commits, you have to rebase them (git rebase or git cherry-pick, for example) on top of Nils' rewritten history.

@simon-king-jena
Member

comment:47

Replying to @vbraun:

Replying to @simon-king-jena:

Once more, my impression is that the git workflow helps us solve problems that we didn't have without git

Noticing conflicts is not a problem when you manually copy patches, correct. However, not noticing conflicts is a problem ;-)

As far as I know, my local repository contains nothing that depends on this ticket. So, how can there be a conflict? I am not talking about conflicts that are just artefacts of how git works.

So, how can I get your code?

If you haven't made any commits that you want to keep: delete your current branch (git branch -D my_branch) and checkout a fresh copy. If you have made commits, you have to rebase (git rebase or git cherrypick, for example) them on top of Nils' rewritten history.

If I have a local branch associated with this ticket, and if I pull from this ticket and there are merge conflicts with my local branch, I'd expect that my old local branch (ticket/15367) be preserved and a new local branch (say, ticket/15367-1) be created that is equal to the branch forced-pushed by Nils. This should be automatically done by either the dev script or by git. And then, I can still decide whether I want to delete my old branch or merge the new and the old branch, or I can explain to Nils why I don't like his new branch and switch back to the old one.

In contrast, you say that in the new workflow I have to decide whether or not to delete my old branch, before I pull from Nils' changes. Why? This approach seems plain wrong to me.

I guess this is a theoretical discussion that belongs to the sage-git list. Since I don't see stuff depend on this ticket (yet), I did as you suggested.

@nbruin
Contributor

nbruin commented Dec 10, 2013

comment:48

It seems that sage --dev has a concept of linking a branch with a trac ticket, which is convenient. In your case, you already had a branch linked with this ticket, so naturally it tried to merge my branch with your local one and due to the rebase there really WAS a conflict. Probably there is a way to "unlink" your local branch from the ticket, so that sage --dev checkout --ticket=15367 gets you a fresh branch. Similarly I'd hope there is a way to locally "reassociate" a branch with a ticket.

Sorry about the rebase, but in this case it seemed warranted because

  • nothing else depended on the branch
  • There was a big back-and-forth on removing and reintroducing the Eraser classes, which would have been silly to archive in the sage history (tripling the modification size, or something like that)

Keeping the trivial edits to CoercionModel_cache_maps.get_stats() attributed to Simon was an interesting exercise, though.

@vbraun
Copy link
Member

vbraun commented Dec 10, 2013

comment:49

Replying to @simon-king-jena:

If I have a local branch associated with this ticket, and if I pull from this ticket and there are merge conflicts with my local branch, I'd expect that my old local branch (ticket/15367) be preserved and a new local branch (say, ticket/15367-1) be created that is equal to the branch forced-pushed by Nils.

The commits from all of those branches are of course preserved in git. If you'd like another local branch to give a name to the different histories, that is fine, but don't force your naming convention on others. To uniquely reference any of the past branch heads you only need the commit SHA1, whose historical values are conveniently listed in the ticket comments.

@nbruin
Contributor

nbruin commented Dec 10, 2013

comment:50

Replying to @simon-king-jena:

If I have a local branch associated with this ticket, and if I pull from this ticket and there are merge conflicts with my local branch, I'd expect that my old local branch (ticket/15367) be preserved and a new local branch (say, ticket/15367-1) be created that is equal to the branch forced-pushed by Nils.

I think you can "unlink" by doing something along the lines of ./sage --dev set-remote None no/remote/branch. Because your local branch is now associated with a different remote branch (is this a git feature or is "associated remote branch" something that sage-dev invents?) I would expect that checking out the ticket would now cause a new, fresh branch to be created.

@simon-king-jena
Member

comment:51

Replying to @nbruin:

It seems that sage --dev has a concept of linking a branch with a trac ticket, which is convenient. In your case, you already had a branch linked with this ticket, so naturally it tried to merge my branch with your local one and due to the rebase there really WAS a conflict.

AFAIK, git knows that this was a forced push. So, the dev script should at least offer a "forced pull".

Anyway, this ticket is not about annoyances and hold-ups caused by the switch to the new workflow.

@simon-king-jena
Member

comment:52

To start reviewing, I merged master (freshly pulled from trac) into the branch, which succeeded.

@simon-king-jena
Member

comment:53

Second step of the review: I checked that the patchbot is right and all tests pass (after merging master).

@simon-king-jena
Member

comment:54

Third step: Re-test timings. I do "branch versus sage-5.13.b4".

%cpaste
cython("""
def test(T, list L1, list L2, list L3, v):
    for i in L1:
        for j in L2:
            for k in L3:
                T[i,j,k] = v
def test2(T, list L1, list L2, list L3):
    for i in L1:
        for j in L2:
            for k in L3:
                v = T[i,j,k]
def test3(T, list L1, list L2, list L3):
    for i in L1:
        for j in L2:
            for k in L3:
                (i,j,k) in T
def test4(T, list L1, list L2, list L3):
    for i in L1:
        for j in L2:
            for k in L3:
                (i,j+1,k) in T
def test5(T, list L1, list L2, list L3):
    for i in L1:
        for j in L2:
            for k in L3:
                del T[i,j,k]
""")

In the branch:

sage: T = sage.structure.coerce_dict.TripleDict(11)
sage: L1 = srange(100)
sage: %time test(T,L1,L1,L1,ZZ)
CPU times: user 9.70 s, sys: 0.05 s, total: 9.75 s
Wall time: 9.77 s
sage: %time test2(T,L1,L1,L1)
CPU times: user 0.64 s, sys: 0.01 s, total: 0.65 s
Wall time: 0.66 s
sage: %time test3(T,L1,L1,L1)
CPU times: user 0.61 s, sys: 0.00 s, total: 0.61 s
Wall time: 0.61 s
sage: %time test4(T,L1,L1,L1)
CPU times: user 0.23 s, sys: 0.00 s, total: 0.23 s
Wall time: 0.23 s
sage: %time test5(T,L1,L1,L1)
CPU times: user 0.84 s, sys: 0.01 s, total: 0.85 s
Wall time: 0.86 s
sage: get_memory_usage()
231.640625

In sage-5.13.b4 (plus #14912, but this doesn't touch coerce dict):

sage: T = sage.structure.coerce_dict.TripleDict(11)
sage: L1 = srange(100)
sage: %time test(T,L1,L1,L1,ZZ)
CPU times: user 15.94 s, sys: 0.11 s, total: 16.05 s
Wall time: 16.08 s
sage: %time test2(T,L1,L1,L1)
CPU times: user 1.86 s, sys: 0.00 s, total: 1.86 s
Wall time: 1.87 s
sage: %time test3(T,L1,L1,L1)
CPU times: user 1.86 s, sys: 0.00 s, total: 1.86 s
Wall time: 1.86 s
sage: %time test4(T,L1,L1,L1)
CPU times: user 1.79 s, sys: 0.02 s, total: 1.81 s
Wall time: 1.82 s
sage: %time test5(T,L1,L1,L1)
CPU times: user 1.72 s, sys: 0.03 s, total: 1.75 s
Wall time: 1.75 s
sage: get_memory_usage()
318.078125

So, everything became faster, and the memory consumption is a bit less (but the latter could very well be because of other tickets).

The effect on startup time is probably not significant, but I get (one run only) 1586.843ms with the branch versus 1600.406ms with 5.13.b4+#14912.

So, the winner is: THIS BRANCH.

Last step of the review: Read the new code.

@simon-king-jena
Member

comment:55

In MonoDict.set, there is

        cursor = self.lookup(<PyObject*><void*>k)
        if cursor.key_id == NULL or cursor.key_id == dummy:
            ...

without saying that cursor is cdef mono_cell*. Similar things happen in MonoDict.iteritems, MonoDict_traverse and MonoDict_clear, and also in TripleDict.set, TripleDict.iteritems, TripleDict_traverse and TripleDict_clear.

First question: Why does it not crash? Second question, even if there is a good answer to the first: Shouldn't it be cdef mono_cell* cursor resp. cdef triple_cell* cursor in order to improve performance?

@nbruin
Contributor

nbruin commented Dec 17, 2013

comment:56

Replying to @simon-king-jena:

First question: Why does it not crash?

Cython has grown type inference.

Second question, even if there is a good answer to the first: Shouldn't it be cdef mono_cell* cursor resp. cdef triple_cell* cursor in order to improve performance?

No, because cython infers the correct type by itself.

I'm not commenting on whether it's good style to rely on said type inference, but it is very convenient when you're programming.

@simon-king-jena
Member

comment:57

Replying to @nbruin:

Replying to @simon-king-jena:

First question: Why does it not crash?

Cython has grown type inference.

Since when?

Amazing.

Anyway, back to needs review then...

@nbruin
Contributor

nbruin commented Dec 17, 2013

comment:58

Replying to @simon-king-jena:

Since when?

According to https://github.com/cython/cython/wiki/ReleaseNotes-0.13 since August 25, 2010 :-).

@simon-king-jena
Member

comment:59

Replying to @nbruin:

Replying to @simon-king-jena:

Since when?

According to https://github.com/cython/cython/wiki/ReleaseNotes-0.13 since August 25, 2010 :-).

So far I have always tried to tell Cython which type to expect.

Anyway.

The code looks good to me, tests pass, the patchbot has nothing to complain, and as we have seen there is an improved performance. Positive review!

@jdemeyer

comment:61

While looking at this code, I noticed that the size argument is never used anywhere (except to interpret it as data):

def __init__(self, size=11, data=None, threshold=0.7, weak_values=False)

Is this intentional? If yes, can we deprecate size? If no, what should we do?
