Comparing changes
base repository: git/git
base: 853ec9aa9be1b467986dd86ffc36c1c8580a6789
head repository: git/git
compare: bf972896d7186f0d29f7807b600f6fdb2837de18
- 5 commits
- 3 files changed
- 1 contributor
Commits on Oct 8, 2021
- t1006: clean up broken objects
A few of the tests create intentionally broken objects with broken types. Let's clean them up after we're done with them, so that later tests don't get confused (we hadn't noticed because this only affects tests which use --batch-all-objects, but I'm about to add more).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- cat-file: mention --unordered along with --batch-all-objects
The note on ordering for --batch-all-objects was written when that was the only possible ordering. These days we have --unordered, too, so let's point to it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- cat-file: disable refs/replace with --batch-all-objects
When we're enumerating all objects in the object database, it doesn't make sense to respect refs/replace. The point of this option is to enumerate all of the objects in the database at a low level. By definition we'd already show the replacement object's contents (under its real oid), and showing those contents under another oid is almost certainly working against what the user is trying to do.

Note that you could make the same argument for something like:

    git show-index <foo.idx | awk '{print $2}' | git cat-file --batch

but there we can't know in cat-file exactly what the user intended, because we don't know the source of the input. They could be trying to do low-level debugging, or they could be doing something more high-level (e.g., imagine a porcelain built around cat-file for its object accesses). So in those cases, we'll have to rely on the user specifying "git --no-replace-objects" to tell us what to do.

One _could_ make an argument that "cat-file --batch" is sufficiently low-level plumbing that it should not respect replace-objects at all (and the caller should do any replacement if they want it). But we have been doing so for some time. The history is a little tangled:

- looking back as far as v1.6.6, we would not respect replace refs for --batch-check, but would for --batch (because the former used sha1_object_info(), and the replace mechanism only affected actual object reads)

- this discrepancy was made even weirder by 98e2092 (cat-file: teach --batch to stream blob objects, 2013-07-10), where we always output the header using the --batch-check code, and then printed the object separately. This could lead to "cat-file --batch" dying (when it notices the size or type changed for a non-blob object) or even producing bogus output (in streaming mode, we didn't notice that we wrote the wrong number of bytes).

- that persisted until 1f7117e (sha1_file: perform object replacement in sha1_object_info_extended(), 2013-12-11), which then respected replace refs for both forms.

So it has worked reliably this way for over 7 years, and we should make sure it continues to do so. That could also be an argument that --batch-all-objects should not change behavior (which this patch is doing), but I really consider the current behavior to be an unintended bug. It's a side effect of how the code is implemented (feeding the oids back into oid_object_info() rather than looking at what we found while reading the loose and packed object storage).

The implementation is straight-forward: we just disable the global read_replace_refs flag when we're in --batch-all-objects mode. It would perhaps be a little cleaner to change the flag we pass to oid_object_info_extended(), but that's not enough. We also read objects via read_object_file() and stream_blob_to_fd(). The former could switch to its _extended() form, but the streaming code has no mechanism for disabling replace refs. Setting the global flag works, and as a bonus, it's impossible to have any "oops, we're sometimes replacing the object and sometimes not" bugs in the output (like the ones caused by 98e2092 above).

The tests here cover the regular-input and --batch-all-objects cases, for both --batch-check and --batch. There is a test in t6050 that covers the regular-input case with --batch already, but this new one goes much further in actually verifying the output (plus covering --batch-check explicitly). This is perhaps a little overkill and the tests would be simpler just covering --batch-check, but I wanted to make sure we're checking that --batch output is consistent between the header and the content. The global-flag technique used here makes that easy to get right, but this is future-proofing us against regressions.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
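The "global flag" approach described in that commit message can be pictured with a minimal, self-contained sketch. In git, read_replace_refs is a real global that the object-reading paths consult; it is defined here only as a stand-in so the fragment compiles on its own, and the batch_options field and helper name are assumptions for illustration, not the patch's actual code.

    /*
     * Hedged sketch, not the actual patch: clear the "respect refs/replace"
     * flag once, up front, so every later read -- header lookup,
     * read_object_file(), streaming -- agrees.
     */
    int read_replace_refs = 1;          /* stand-in for git's real global */

    struct batch_options {
            int all_objects;            /* set when --batch-all-objects is given */
    };

    void maybe_disable_replace_refs(const struct batch_options *opt)
    {
            /*
             * When enumerating every object in the database, refs/replace
             * substitution would only show contents under the wrong oid,
             * so turn it off for the whole run.
             */
            if (opt->all_objects)
                    read_replace_refs = 0;
    }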
- cat-file: split ordered/unordered batch-all-objects callbacks
When we originally added --batch-all-objects, it stuffed everything into an oid_array(), and then iterated over that array with a callback to write the actual output. When we later added --unordered, that code path writes immediately as we discover each object, but just calls the same batch_object_cb() as our entry point to the writing code.

That callback has a narrow interface; it only receives the oid, but we know much more about each object in the unordered write (which we'll make use of in the next patch). So let's just call batch_object_write() directly. The callback wasn't saving us much effort.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
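A rough sketch of the shape of that refactor, using simplified stand-in types rather than the real builtin/cat-file.c declarations: the unordered iteration callback fills in the oid and calls the writer directly, instead of funnelling through an oid-only callback, which leaves room to hand extra per-object information to the writer later.

    /*
     * Sketch only: stand-in types and simplified signatures, not git's
     * real declarations.
     */
    struct object_id { unsigned char hash[32]; };
    struct batch_options { int unordered; };
    struct expand_data { struct object_id oid; };

    /* writer shared with the ordered path (parameters simplified) */
    void batch_object_write(const char *obj_name, struct batch_options *opt,
                            struct expand_data *data);

    struct unordered_cb_data {
            struct batch_options *opt;
            struct expand_data *expand;
    };

    /* called once per object as the enumeration discovers it */
    int batch_unordered_object(const struct object_id *oid, void *vdata)
    {
            struct unordered_cb_data *cb = vdata;

            cb->expand->oid = *oid;                        /* record the object */
            batch_object_write(NULL, cb->opt, cb->expand); /* write immediately */
            return 0;
    }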
- cat-file: use packed_object_info() for --batch-all-objects
When "cat-file --batch-all-objects" iterates over each object, it knows where to find each one. But when we look up details of the object, we don't use that information at all. This patch teaches it to use the pack/offset pair when we're iterating over objects in a pack. This yields a measurable speed improvement (timings on a fully packed clone of linux.git): Benchmark #1: ./git.old cat-file --batch-all-objects --unordered --batch-check="%(objecttype) %(objectname)" Time (mean ± σ): 8.128 s ± 0.118 s [User: 7.968 s, System: 0.156 s] Range (min … max): 8.007 s … 8.301 s 10 runs Benchmark #2: ./git.new cat-file --batch-all-objects --unordered --batch-check="%(objecttype) %(objectname)" Time (mean ± σ): 4.294 s ± 0.064 s [User: 4.167 s, System: 0.125 s] Range (min … max): 4.227 s … 4.457 s 10 runs Summary './git.new cat-file --batch-all-objects --unordered --batch-check="%(objecttype) %(objectname)"' ran 1.89 ± 0.04 times faster than './git.old cat-file --batch-all-objects --unordered --batch-check="%(objecttype) %(objectname)" The implementation is pretty simple: we just call packed_object_info() instead of oid_object_info_extended() when we can. Most of the changes are just plumbing the pack/offset pair through the callstack. There is one subtlety: replace lookups are not handled by packed_object_info(). But since those are disabled for --batch-all-objects, and since we'll only have pack info when that option is in effect, we don't have to worry about that. There are a few limitations to this optimization which we could address with further work: - I didn't bother recording when we found an object loose. Technically this could save us doing a fruitless lookup in the pack index. But opening and mmap-ing a loose object is so expensive in the first place that this doesn't matter much. And if your repository is large enough to care about per-object performance, most objects are going to be packed anyway. - This works only in --unordered mode. For the sorted mode, we'd have to record the pack/offset pair as part of our oid-collection. That's more code, plus at least 16 extra bytes of heap per object. It would probably still be a net win in runtime, but we'd need to measure. - For --batch, this still helps us with getting the object metadata, but we still do a from-scratch lookup for the object contents. This probably doesn't matter that much, because the lookup cost will be much smaller relative to the cost of actually unpacking and printing the objects. For small objects, we could probably swap out read_object_file() for using packed_object_info() with a "object_info.contentp" to get the contents. But we'd still need to deal with streaming for larger objects. A better path forward here is to teach the initial oid_object_info_extended() / packed_object_info() calls to retrieve the contents of smaller objects while they are already being accessed. That would save the extra lookup entirely. But it's a non-trivial feature to add to the object_info code, so I left it for now. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The full diff between these two commits is not shown here; to see it locally, run:
git diff 853ec9aa9be1b467986dd86ffc36c1c8580a6789...bf972896d7186f0d29f7807b600f6fdb2837de18