Maintenance builtin, allowing 'gc --auto' customization #671
Conversation
Force-pushed d3eef56 to d753378
Force-pushed 220bc90 to 3f4a687
There is an issue in commit f9dd01721396142ec57799f383cdac3480357e14:
Force-pushed 948393d to 3120604
Force-pushed 46940e2 to f7fdf72
/submit
Submitted as pull.671.git.1594131695.gitgitgadget@gmail.com
This branch is now known as
This patch series was integrated into seen via git@cbc3414.
This patch series was integrated into seen via git@54aaee1.
This patch series was integrated into seen via git@8b74962.
This patch series was integrated into seen via git@849ae20.
On the Git mailing list, Emily Shaffer wrote (reply to this):
This patch series was integrated into seen via git@8ba943f.
builtin/gc.c (Outdated)
@@ -116,50 +116,51 @@ static void process_log_file_on_signal(int signo)
raise(signo);
On the Git mailing list, Jonathan Tan wrote (reply to this):
> From: Derrick Stolee <dstolee@microsoft.com>
>
> The report_last_gc_error() method uses git_pathdup() which implicitly
> uses the_repository. Replace this with strbuf_repo_path() to get a
> path buffer we control that uses a given repository pointer.
>
> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Regarding the first 4 patches, it would be great if there was a test
like the one in test_repository.c (together with a code coverage report,
perhaps) that verifies that all code paths do not use the_repository.
But set aside that test for now - I don't think gc.c fully supports
arbitrary repositories. In particular, when running a subprocess, it
inherits the environment from the current process. I see that future
patches try to resolve this by passing "-C", but that does not work if
environment variables like GIT_DIR are set (because the environment
variables override the "-C"). Perhaps we need a function that runs a
subprocess in a specific repository. I ran into the same problem when
attempting to make fetch-pack (which runs index-pack as a subprocess)
support arbitrary repositories, but I haven't looked deeply into
resolving this yet (and I haven't looked at that problem in a while).
Having said that, I'm fine with these patches being in the set - the
only negative is that perhaps a reader would be misled into thinking
that GC supports arbitrary repositories.
On the Git mailing list, Derrick Stolee wrote (reply to this):
On 7/8/2020 10:22 PM, Jonathan Tan wrote:
>> From: Derrick Stolee <dstolee@microsoft.com>
>>
>> The report_last_gc_error() method uses git_pathdup() which implicitly
>> uses the_repository. Replace this with strbuf_repo_path() to get a
>> path buffer we control that uses a given repository pointer.
>>
>> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
>
> Regarding the first 4 patches, it would be great if there was a test
> like the one in test_repository.c (together with a code coverage report,
> perhaps) that verifies that all code paths do not use the_repository.
>
> But set aside that test for now - I don't think gc.c fully supports
> arbitrary repositories. In particular, when running a subprocess, it
> inherits the environment from the current process. I see that future
> patches try to resolve this by passing "-C", but that does not work if
> environment variables like GIT_DIR are set (because the environment
> variables override the "-C"). Perhaps we need a function that runs a
> subprocess in a specific repository. I ran into the same problem when
> attempting to make fetch-pack (which runs index-pack as a subprocess)
> support arbitrary repositories, but I haven't looked deeply into
> resolving this yet (and I haven't looked at that problem in a while).
>
> Having said that, I'm fine with these patches being in the set - the
> only negative is that perhaps a reader would be misled into thinking
> that GC supports arbitrary repositories.
I agree. I hope that I do not give the impression that GC is now
safe for arbitrary repositories. I only thought that this was prudent
to do before I start taking new dependencies on the code.
It's probably time for someone to do a round of the_repository cleanups
again, and perhaps the RC window is a good time to think about that
(for submission after the release).
Thanks,
-Stolee
builtin/gc.c (Outdated)
@@ -116,50 +116,51 @@ static void process_log_file_on_signal(int signo)
raise(signo);
On the Git mailing list, Jonathan Tan wrote (reply to this):
> This list is also inserted into a hashmap. This allows command-line
> arguments to quickly find the tasks by name, not sensitive to case. To
> ensure this list and hashmap work well together, the list only contains
> pointers to the struct information. This will allow a sort on the list
> while preserving the hashmap data.
I think having the hashmap is unnecessarily complicated in this case -
with the small number of tasks, a list would be fine. But I don't feel
strongly about this.
On the Git mailing list, Derrick Stolee wrote (reply to this):
On 7/8/2020 10:25 PM, Jonathan Tan wrote:
>> This list is also inserted into a hashmap. This allows command-line
>> arguments to quickly find the tasks by name, not sensitive to case. To
>> ensure this list and hashmap work well together, the list only contains
>> pointers to the struct information. This will allow a sort on the list
>> while preserving the hashmap data.
>
> I think having the hashmap is unnecessarily complicated in this case -
> with the small number of tasks, a list would be fine. But I don't feel
> strongly about this.
You're probably right that iterating through a list with (hopefully)
at most a dozen entries is fast enough that a hashmap is overkill here.
Now is the real test: can I change this patch in v2 without needing
to mess with any of the others? The intention here was to make adding
tasks as simple as possible, so we shall see. :D
Thanks,
-Stolee
On the Git mailing list, Junio C Hamano wrote (reply to this):
Derrick Stolee <stolee@gmail.com> writes:
> On 7/8/2020 10:25 PM, Jonathan Tan wrote:
>>> This list is also inserted into a hashmap. This allows command-line
>>> arguments to quickly find the tasks by name, not sensitive to case. To
>>> ensure this list and hashmap work well together, the list only contains
>>> pointers to the struct information. This will allow a sort on the list
>>> while preserving the hashmap data.
>>
>> I think having the hashmap is unnecessarily complicated in this case -
>> with the small number of tasks, a list would be fine. But I don't feel
>> strongly about this.
>
> You're probably right that iterating through a list with (hopefully)
> at most a dozen entries is fast enough that a hashmap is overkill here.
>
> Now is the real test: can I change this patch in v2 without needing
> to mess with any of the others? The intention here was to make adding
> tasks as simple as possible, so we shall see. :D
Adding a new element to a list would be simple no matter how the
list is represented. But I think the real question is what access
pattern we expect. Do we need to look up by name a single one or
selected few? Do we need the iteration/enumeration be stable? etc.
@@ -0,0 +1,78 @@
git-maintenance(1)
On the Git mailing list, Jonathan Tan wrote (reply to this):
> +static int run_write_commit_graph(struct repository *r)
> +{
> + int result;
> + struct argv_array cmd = ARGV_ARRAY_INIT;
> +
> + argv_array_pushl(&cmd, "-C", r->worktree,
> + "commit-graph", "write",
> + "--split", "--reachable",
> + NULL);
As mentioned in my reply to an earlier patch (sent a few minutes ago),
this won't work if there are environment variables like GIT_DIR present.
Besides that, the overall design looks reasonable.
On the Git mailing list, Derrick Stolee wrote (reply to this):
On 7/8/2020 10:29 PM, Jonathan Tan wrote:
>> +static int run_write_commit_graph(struct repository *r)
>> +{
>> + int result;
>> + struct argv_array cmd = ARGV_ARRAY_INIT;
>> +
>> + argv_array_pushl(&cmd, "-C", r->worktree,
>> + "commit-graph", "write",
>> + "--split", "--reachable",
>> + NULL);
>
> As mentioned in my reply to an earlier patch (sent a few minutes ago),
> this won't work if there are environment variables like GIT_DIR present.
Do we not pass GIT_DIR to the subcommand? Or does using "-C" override
the GIT_DIR?
Thanks,
-Stolee
On the Git mailing list, Jeff King wrote (reply to this):
On Thu, Jul 09, 2020 at 07:14:41AM -0400, Derrick Stolee wrote:
> On 7/8/2020 10:29 PM, Jonathan Tan wrote:
> >> +static int run_write_commit_graph(struct repository *r)
> >> +{
> >> + int result;
> >> + struct argv_array cmd = ARGV_ARRAY_INIT;
> >> +
> >> + argv_array_pushl(&cmd, "-C", r->worktree,
> >> + "commit-graph", "write",
> >> + "--split", "--reachable",
> >> + NULL);
> >
> > As mentioned in my reply to an earlier patch (sent a few minutes ago),
> > this won't work if there are environment variables like GIT_DIR present.
>
> Do we not pass GIT_DIR to the subcommand? Or does using "-C" override
> the GIT_DIR?
We do pass GIT_DIR to the subcommand, and "-C" does not override it. I
think this code would work as long as "r" is the_repository, which it
would be in the current code. But then the "-C" would be doing nothing
useful (it might change to the top of the worktree if we weren't there
for some reason, but I don't think "commit-graph write" would care
either way).
But if "r" is some other repository, "commit-graph" would continue to
operate in the parent process repository because of the inherited
GIT_DIR. Using "--git-dir" would solve that, but as a general practice,
if you're spawning a sub-process that might be in another repository,
you should clear any repo-specific environment variables. The list is in
local_repo_env, which you can feed to the "env" or "env_array" parameter
of a child_process (see the use in connect.c for an example).
Even in the current scheme where "r" is always the_repository, I suspect
this might still be buggy. If we're in a bare repository, presumably
r->worktree would be NULL.
-Peff
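To make Peff's suggestion concrete, here is a minimal sketch (not code from this series) of spawning a git subcommand against a specific repository while clearing the repo-scoped environment. The helper name run_git_in_repo() is hypothetical, and the strvec-era child_process API is assumed:

#include "cache.h"        /* local_repo_env */
#include "run-command.h"  /* struct child_process, run_command() */
#include "strvec.h"

static int run_git_in_repo(struct repository *r, const char **args)
{
        struct child_process child = CHILD_PROCESS_INIT;
        int i;

        child.git_cmd = 1;

        /*
         * An env entry without '=' unsets that variable in the child;
         * local_repo_env lists every repo-scoped variable (GIT_DIR etc.),
         * so inherited settings cannot override the target repository.
         */
        for (i = 0; local_repo_env[i]; i++)
                strvec_push(&child.env_array, local_repo_env[i]);

        /* name the target repository explicitly instead of relying on env */
        strvec_pushl(&child.args, "--git-dir", r->gitdir, NULL);
        strvec_pushv(&child.args, args);

        return run_command(&child);
}

Passing '--git-dir' rather than '-C r->worktree' also sidesteps the bare-repository case Peff notes, where r->worktree would be NULL.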
On the Git mailing list, Derrick Stolee wrote (reply to this):
On 7/9/2020 6:52 PM, Jeff King wrote:
> On Thu, Jul 09, 2020 at 07:14:41AM -0400, Derrick Stolee wrote:
>
>> On 7/8/2020 10:29 PM, Jonathan Tan wrote:
>>>> +static int run_write_commit_graph(struct repository *r)
>>>> +{
>>>> + int result;
>>>> + struct argv_array cmd = ARGV_ARRAY_INIT;
>>>> +
>>>> + argv_array_pushl(&cmd, "-C", r->worktree,
>>>> + "commit-graph", "write",
>>>> + "--split", "--reachable",
>>>> + NULL);
>>>
>>> As mentioned in my reply to an earlier patch (sent a few minutes ago),
>>> this won't work if there are environment variables like GIT_DIR present.
>>
>> Do we not pass GIT_DIR to the subcommand? Or does using "-C" override
>> the GIT_DIR?
>
> We do pass GIT_DIR to the subcommand, and "-C" does not override it. I
> think this code would work as long as "r" is the_repository, which it
> would be in the current code. But then the "-C" would be doing nothing
> useful (it might change to the top of the worktree if we weren't there
> for some reason, but I don't think "commit-graph write" would care
> either way).
>
> But if "r" is some other repository, "commit-graph" would continue to
> operate in the parent process repository because of the inherited
> GIT_DIR. Using "--git-dir" would solve that, but as a general practice,
> if you're spawning a sub-process that might be in another repository,
> you should clear any repo-specific environment variables. The list is in
> local_repo_env, which you can feed to the "env" or "env_array" parameter
> of a child_process (see the use in connect.c for an example).
>
> Even in the current scheme where "r" is always the_repository, I suspect
> this might still be buggy. If we're in a bare repository, presumably
> r->worktree would be NULL.
Ah. I'll investigate this more and work to create a way to
run a subcommand in a given repository. Your pointers will
help a lot.
Thanks,
-Stolee
@@ -0,0 +1,94 @@
git-maintenance(1)
On the Git mailing list, Jonathan Tan wrote (reply to this):
> 3. By adding a new refspec "+refs/heads/*:refs/hidden/<remote>/*"
> we can ensure that we actually load the new values somewhere in
> our refspace while not updating refs/heads or refs/remotes. By
> storing these refs here, the commit-graph job will update the
> commit-graph with the commits from these hidden refs.
>
> 4. --prune will delete the refs/hidden/<remote> refs that no
> longer appear on the remote.
Having a ref path where Git can place commit IDs that it needs persisted
is useful, not only in this case but in other cases (e.g. when fetching
a submodule commit by hash, we might not have a ref name for that commit
but want to persist it anyway), so I look forward to having something
like this.
The name of this special ref path and its specific nature could be
discussed further, but maybe it is sufficient for now to just say that
the refs under this special ref path are controlled by Git, and their
layout is experimental and subject to change (e.g. future versions of
Git could just erase the entire path and rewrite the refs its own way).
This patch series was integrated into seen via git@2d70fc6.
On the Git mailing list, Derrick Stolee wrote (reply to this):
Force-pushed f7fdf72 to ab74e53
On the Git mailing list, Derrick Stolee wrote (reply to this):
This patch series was integrated into seen via git@bf7a618.
This patch series was integrated into seen via git@e69a107.
This patch series was integrated into seen via git@e8d9043.
This patch series was integrated into seen via git@583391a.
This patch series was integrated into seen via git@ab49a6f.
This patch series was integrated into seen via git@a5744db.
The 'gc' builtin is our current entrypoint for automatically maintaining a repository. This one tool does many operations, such as repacking the repository, packing refs, and rewriting the commit-graph file. The name implies it performs "garbage collection," which means several different things, and some users may not want to use this operation that rewrites the entire object database.

Create a new 'maintenance' builtin that will become a more general-purpose command. To start, it will only support the 'run' subcommand, but will later expand to add subcommands for scheduling maintenance in the background.

For now, the 'maintenance' builtin is a thin shim over the 'gc' builtin. In fact, the only option is the '--auto' toggle, which is handed directly to the 'gc' builtin. The current change is isolated to this simple operation to prevent more interesting logic from being lost in all of the boilerplate of adding a new builtin.

Use the existing builtin/gc.c file because we want to share code between the two builtins. It is possible that 'maintenance' will replace the 'gc' builtin entirely at some point, leaving 'git gc' as an alias for some specific arguments to 'git maintenance run'.

Create a new test_subcommand helper that allows us to test whether a certain subcommand was run. It requires storing the GIT_TRACE2_EVENT logs in a file. A negation mode is available that will be used in later tests.

Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Maintenance activities are commonly used as steps in larger scripts. Providing a '--quiet' option allows those scripts to be less noisy when run on a terminal window. Turn this mode on by default when stderr is not a terminal. Pipe the option to the 'git gc' child process.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
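A minimal sketch of the described default; the struct and field names here are assumptions, not the patch's code:

#include <string.h>
#include <unistd.h>

struct maintenance_opts {
        int auto_flag;
        int quiet;
};

static void init_maintenance_opts(struct maintenance_opts *opts)
{
        memset(opts, 0, sizeof(*opts));
        /* be quiet by default when stderr is not a terminal */
        if (!isatty(2))
                opts->quiet = 1;
}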
The run_auto_gc() method is used in several places to trigger a check for repo maintenance after some Git commands, such as 'git commit' or 'git fetch'.

To allow for extra customization of this maintenance activity, replace the 'git gc --auto [--quiet]' call with one to 'git maintenance run --auto [--quiet]'. As we extend the maintenance builtin with other steps, users will be able to select different maintenance activities.

Rename run_auto_gc() to run_auto_maintenance() to make it clearer what is happening on this call, and to expose all callers in the current diff. Rewrite the method to use a struct child_process to simplify the calls slightly.

Since 'git fetch' already allows disabling the 'git gc --auto' subprocess, add an equivalent option with a different name to be more descriptive of the new behavior: '--[no-]maintenance'. Update the documentation to include these options at the same time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
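A sketch of the rewritten helper as described above (the exact body is an assumption):

#include "run-command.h"
#include "strvec.h"

int run_auto_maintenance(int quiet)
{
        struct child_process maint = CHILD_PROCESS_INIT;

        maint.git_cmd = 1;
        strvec_pushl(&maint.args, "maintenance", "run", "--auto",
                     quiet ? "--quiet" : "--no-quiet", NULL);

        /* replaces the old 'git gc --auto [--quiet]' invocation */
        return run_command(&maint);
}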
In anticipation of implementing multiple maintenance tasks inside the 'maintenance' builtin, use a list of structs to describe the work to be done. The struct maintenance_task stores the name of the task (as given by a future command-line argument) along with a function pointer to its implementation and a boolean for whether the step is enabled.

A list of these structs is initialized with the full list of implemented tasks along with a default order. For now, this list only contains the "gc" task, which is also the only task enabled by default.

The run subcommand will return a nonzero exit code if any task fails. However, it will attempt all tasks in its loop before returning with the failure, and each failed task will emit an error message.

Helped-by: Taylor Blau <me@ttaylorr.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
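A sketch of the task table and run loop described above (field and function names are illustrative, not the patch's exact code; the usual builtin/gc.c includes are assumed):

typedef int maintenance_task_fn(void);

struct maintenance_task {
        const char *name;         /* matched against a future --task=<task> */
        maintenance_task_fn *fn;  /* implementation of the task */
        unsigned enabled : 1;     /* run by default? */
};

static int maintenance_task_gc(void);  /* hypothetical 'gc' implementation */

static struct maintenance_task tasks[] = {
        { "gc", maintenance_task_gc, 1 },  /* the only default-enabled task */
};

static int maintenance_run(void)
{
        int i, result = 0;

        for (i = 0; i < ARRAY_SIZE(tasks); i++) {
                if (!tasks[i].enabled)
                        continue;
                /* attempt every enabled task; report failure at the end */
                if (tasks[i].fn()) {
                        error(_("task '%s' failed"), tasks[i].name);
                        result = 1;
                }
        }
        return result;
}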
The first new task in the 'git maintenance' builtin is the 'commit-graph' job. It is based on the sequence of events in the 'commit-graph' job in Scalar [1]. This sequence is as follows:

1. git commit-graph write --reachable --split
2. git commit-graph verify --shallow
3. If the verify succeeds, stop.
4. Delete the commit-graph-chain file.
5. git commit-graph write --reachable --split

By writing an incremental commit-graph file using the "--split" option we minimize the disruption from this operation. The default behavior is to merge layers until the new "top" layer is less than half the size of the layer below. This provides quick writes most of the time, with the longer writes following a power law distribution.

Most importantly, concurrent Git processes only look at the commit-graph-chain file for a very short amount of time, so they will very likely not be holding a handle to the file when we try to replace it. (This only matters on Windows.)

If a concurrent process reads the old commit-graph-chain file, but our job expires some of the .graph files before they can be read, then those processes will see a warning message (but not fail). This could be avoided by a future update to use the --expire-time argument when writing the commit-graph.

By using 'git commit-graph verify --shallow' we can ensure that the file we just wrote is valid. This is an extra safety precaution that is faster than our 'write' subcommand. In the rare situation that the newest layer of the commit-graph is corrupt, we can "fix" the corruption by deleting the commit-graph-chain file and rewriting the full commit-graph as a new one-layer commit-graph. This does not completely prevent _that_ file from being corrupt, but it does recompute the commit-graph by parsing commits from the object database.

In our use of this step in Scalar and VFS for Git, we have only seen this issue arise because our microsoft/git fork reverted 43d3561 ("commit-graph write: don't die if the existing graph is corrupt", 2019-03-25) for a while to keep commit-graph writes very fast. We dropped the revert when updating to v2.23.0. The verify still has potential for catching corrupt data across the layer boundary: if the new file has commit X with parent Y in an old file, but the commit ID for Y in the old file had a bitswap, then we will notice that in the 'verify' command.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/CommitGraphStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
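The five steps above as a control-flow sketch, with each run_* helper standing in for a 'git commit-graph ...' child process (all helper names hypothetical):

static int maintenance_task_commit_graph(void)
{
        /* step 1: git commit-graph write --reachable --split */
        if (run_write_commit_graph())
                return 1;

        /* steps 2-3: git commit-graph verify --shallow; success means done */
        if (!run_verify_commit_graph())
                return 0;

        /* step 4: the newest layer is corrupt; drop the chain file */
        if (remove_commit_graph_chain())
                return 1;

        /* step 5: rewrite the full commit-graph from the object database */
        return run_write_commit_graph();
}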
A user may want to only run certain maintenance tasks in a certain order. Add the --task=<task> option, which allows a user to specify an ordered list of tasks to run. These cannot be run multiple times, however.

Here is where our array of maintenance_task pointers becomes critical. We can sort the array of pointers based on the task order, but we do not want to move the struct data itself, in order to preserve the hashmap references. We use the hashmap to match the --task=<task> arguments to the task struct data.

Keep in mind that the 'enabled' member of the maintenance_task struct is a placeholder for a future 'maintenance.<task>.enabled' config option. Thus, we use the 'enabled' member to specify which tasks are run when the user does not specify any --task=<task> arguments. The 'enabled' member should be ignored if --task=<task> appears.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Performing maintenance on a Git repository involves writing data to the .git directory, which is not safe to do with multiple writers attempting the same operation. Ensure that only one 'git maintenance' process is running at a time by holding a file-based lock. The mere presence of the .git/maintenance.lock file will prevent future maintenance. This lock is never committed, since it does not represent meaningful data. Instead, it is only a placeholder.

If the lock file already exists, then fail silently. This will become very important later when we implement the 'fetch' task, as this is our stop-gap against creating a recursive process loop between 'git fetch' and 'git maintenance run'.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Currently, a normal run of "git maintenance run" will only run the 'gc' task, as it is the only one enabled. This is mostly for backwards-compatible reasons, since "git maintenance run --auto" commands replaced previous "git gc --auto" commands after some Git processes. Users could manually run specific maintenance tasks by calling "git maintenance run --task=<task>" directly.

Allow users to customize which steps are run automatically using config. The 'maintenance.<task>.enabled' option can then turn on these other tasks (or turn off the 'gc' task).

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
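A sketch of the config lookup, using git's real git_config_get_bool() API over the task table from the earlier sketch (loop placement assumed):

#include "config.h"
#include "strbuf.h"

static void initialize_task_config(void)
{
        int i;
        struct strbuf key = STRBUF_INIT;

        for (i = 0; i < ARRAY_SIZE(tasks); i++) {
                int enabled;

                strbuf_reset(&key);
                strbuf_addf(&key, "maintenance.%s.enabled", tasks[i].name);
                /* git_config_get_bool() returns 0 when the key is set */
                if (!git_config_get_bool(key.buf, &enabled))
                        tasks[i].enabled = enabled;  /* config overrides default */
        }
        strbuf_release(&key);
}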
The 'git maintenance run' command has an '--auto' option. This is used by other Git commands, such as 'git commit' or 'git fetch', to check if maintenance should be run after adding data to the repository. Previously, this --auto option was only used to add the argument to the 'git gc' command as part of the 'gc' task. We will be expanding the other tasks to perform a check to see if they should do work as part of the --auto flag, when they are enabled by config.

First, update the 'gc' task to perform the auto check inside the maintenance process. This prevents running an extra 'git gc --auto' command when not needed. It also shows a model for other tasks.

Second, use the 'auto_condition' function pointer as a signal for whether we enable the maintenance task under '--auto'. For instance, we do not want to enable the 'fetch' task in '--auto' mode, so that function pointer will remain NULL.

Now that we are not automatically calling 'git gc', a test in t5514-fetch-multiple.sh must be changed to watch for 'git maintenance' instead.

We continue to pass the '--auto' option to the 'git gc' command when necessary, because the gc.autoDetach config option changes behavior. Likely, we will want to absorb the daemonizing behavior implied by gc.autoDetach as a maintenance.autoDetach config option.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
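A sketch of the added function pointer and its check, extending the earlier task-struct sketch (names assumed):

typedef int maintenance_auto_fn(void);

struct maintenance_task {
        const char *name;
        maintenance_task_fn *fn;
        maintenance_auto_fn *auto_condition;  /* NULL: never run with --auto */
        unsigned enabled : 1;
};

static int should_run_task(struct maintenance_task *task, int auto_flag)
{
        if (!auto_flag)
                return 1;
        /* under --auto, run only if the task's condition says there is work */
        return task->auto_condition && task->auto_condition();
}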
Instead of writing a new commit-graph in every 'git maintenance run --auto' process (when maintenance.commit-graph.enabled is configured to be true), only write when there are "enough" commits not in a commit-graph file.

This count is controlled by the maintenance.commit-graph.auto config option. To compute the count, use a depth-first search starting at each ref, leaving markers using the PARENT1 flag. If this count reaches the limit, then terminate early and start the task. Otherwise, this operation will peel every ref and parse the commit it points to. If these are all in the commit-graph, then this is typically a very fast operation. Users with many refs might feel a slow-down, and hence could consider updating their limit to be very small. A negative value will force the step to run every time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
If you run fetch but record the result in remote-tracking branches, and either you do nothing with the fetched refs (e.g. you are merely mirroring) or you always work from the remote-tracking refs (e.g. you fetch and then merge origin/branchname separately), you can get away with having no FETCH_HEAD at all.

Teach "git fetch" a command line option "--[no-]write-fetch-head" and a "fetch.writeFetchHEAD" configuration variable. Without either, the default is to write FETCH_HEAD, and the usual rule that the command line option defeats the configured default applies.

Note that under "--dry-run" mode, FETCH_HEAD is never written; otherwise you'd see a list of objects in the file that you do not actually have. Passing `--write-fetch-head` does not force `git fetch` to write the file.

Also note that this option is explicitly passed when "git pull" internally invokes "git fetch", so that those who configured their "git fetch" not to write FETCH_HEAD would not be able to break the cooperation between these two commands. "git pull" must see what "git fetch" got recorded in FETCH_HEAD to work correctly.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
When working with very large repositories, an incremental 'git fetch' command can download a large amount of data. If there are many other users pushing to a common repo, then this data can rival the initial pack-file size of a 'git clone' of a medium-size repo.

Users may want to keep the data on their local repos as close as possible to the data on the remote repos by fetching periodically in the background. This can break up a large daily fetch into several smaller hourly fetches. The task is called "prefetch" because it is work done in advance of a foreground fetch to make that 'git fetch' command much faster.

However, if we simply ran 'git fetch <remote>' in the background, then the user running a foreground 'git fetch <remote>' would lose some important feedback when a new branch appears or an existing branch updates. This is especially true if a remote branch is force-updated and this isn't noticed by the user because it occurred in the background. Further, the functionality of 'git push --force-with-lease' becomes suspect.

When running 'git fetch <remote> <options>' in the background, use the following options for careful updating (a code sketch follows this message):

1. --no-tags prevents getting a new tag when a user wants to see the new tags appear in their foreground fetches.

2. --refmap= removes the configured refspec which usually updates refs/remotes/<remote>/* with the refs advertised by the remote. While this looks confusing, this was documented and tested by b40a502 (fetch: document and test --refmap="", 2020-01-21), including this sentence in the documentation:

   Providing an empty `<refspec>` to the `--refmap` option causes Git to ignore the configured refspecs and rely entirely on the refspecs supplied as command-line arguments.

3. By adding a new refspec "+refs/heads/*:refs/prefetch/<remote>/*" we can ensure that we actually load the new values somewhere in our refspace while not updating refs/heads or refs/remotes. By storing these refs here, the commit-graph job will update the commit-graph with the commits from these hidden refs.

4. --prune will delete the refs/prefetch/<remote> refs that no longer appear on the remote.

5. --no-write-fetch-head prevents updating FETCH_HEAD.

We've been using this step as a critical background job in Scalar [1] (and VFS for Git). This solved a pain point that was showing up in user reports: fetching was a pain! Users do not like waiting to download the data that was created while they were away from their machines. After implementing background fetch, the foreground fetch commands sped up significantly because they mostly just update refs and download a small amount of new data. The effect is especially dramatic when paired with --no-show-forced-updates (through fetch.showForcedUpdates=false).

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/FetchStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
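A sketch of the per-remote child invocation with the five options above; the function shape is an assumption, while the options themselves are quoted from the message:

static int fetch_remote(const char *remote, int quiet)
{
        struct child_process child = CHILD_PROCESS_INIT;

        child.git_cmd = 1;
        strvec_pushl(&child.args, "fetch", remote,
                     "--prune", "--no-tags",
                     "--no-write-fetch-head", "--refmap=", NULL);
        if (quiet)
                strvec_push(&child.args, "--quiet");

        /* land the refs under refs/prefetch/<remote>/ instead of refs/remotes/ */
        strvec_pushf(&child.args, "+refs/heads/*:refs/prefetch/%s/*", remote);

        return !!run_command(&child);
}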
One goal of background maintenance jobs is to allow a user to disable auto-gc (gc.auto=0) but keep their repository in a clean state. Without any cleanup, loose objects will clutter the object database and slow operations. In addition, the loose objects will take up extra space because they are not stored with deltas against similar objects.

Create a 'loose-objects' task for the 'git maintenance run' command. This helps clean up loose objects without disrupting concurrent Git commands, using the following sequence of events:

1. Run 'git prune-packed' to delete any loose objects that exist in a pack-file. Concurrent commands will prefer the packed version of the object to the loose version. (Of course, there are exceptions for commands that specifically care about the location of an object. These are rare for a user to run on purpose, and we hope a user that has selected background maintenance will not be trying to do foreground maintenance.)

2. Run 'git pack-objects' on a batch of loose objects. These objects are grouped by scanning the loose object directories in lexicographic order until listing all loose objects -or- reaching 50,000 objects. This is more than enough if the loose objects are created only by a user doing normal development. We noticed users with _millions_ of loose objects because VFS for Git downloads blobs on-demand when a file read operation requires populating a virtual file. This has the potential of happening in partial clones if someone runs 'git grep' or otherwise evades the batch-download feature for requesting promisor objects.

This step is based on a similar step in Scalar [1] and VFS for Git.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/LooseObjectsStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
The loose-objects task deletes loose objects that already exist in a pack-file, then places the remaining loose objects into a new pack-file. If this step runs all the time, then we risk creating pack-files with very few objects with every 'git commit' process. To prevent overwhelming the packs directory with small pack-files, require a minimum number of objects to justify the task.

The 'maintenance.loose-objects.auto' config option specifies a minimum number of loose objects to justify the task to run under the '--auto' option. This defaults to 100 loose objects. Setting the value to zero will prevent the step from running under '--auto', while a negative value will force it to run every time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
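A sketch of this --auto check using git's real for_each_loose_file_in_objdir() iterator; the surrounding wiring is assumed, not taken from the patch:

struct loose_count {
        int count;
        int limit;
};

static int count_loose(const struct object_id *oid,
                       const char *path, void *data)
{
        struct loose_count *c = data;
        /* a non-zero return stops the iteration early */
        return ++c->count >= c->limit;
}

static int loose_object_auto_condition(void)
{
        struct loose_count c = { 0, 100 };  /* default threshold: 100 */

        git_config_get_int("maintenance.loose-objects.auto", &c.limit);
        if (c.limit <= 0)
                return c.limit < 0;  /* 0 disables; negative forces a run */

        return for_each_loose_file_in_objdir(
                        the_repository->objects->odb->path,
                        count_loose, NULL, NULL, &c);
}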
The core.multiPackIndex setting has been around since c4d2522 (config: create core.multiPackIndex setting, 2018-07-12), but has been disabled by default. If a user wishes to use the multi-pack-index feature, then they must enable this config and run 'git multi-pack-index write'.

The multi-pack-index feature is relatively stable now, so make the config option true by default. For users that do not use a multi-pack-index, the only extra cost will be a file lookup to see if a multi-pack-index file exists (once per process, per object directory).

Also, this config option will be referenced by an upcoming "incremental-repack" task in the maintenance builtin, so move the config option into the repository settings struct. Note that if GIT_TEST_MULTI_PACK_INDEX=1, then we want to ignore the config option and treat core.multiPackIndex as enabled.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
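A sketch of the settings plumbing described above; the settings field name and placement are assumptions:

void prepare_midx_setting(struct repository *r)
{
        int enabled = 1;  /* new default: on */

        repo_config_get_bool(r, "core.multiPackIndex", &enabled);

        /* GIT_TEST_MULTI_PACK_INDEX=1 ignores the config and forces it on */
        if (git_env_bool("GIT_TEST_MULTI_PACK_INDEX", 0))
                enabled = 1;

        r->settings.core_multi_pack_index = enabled;
}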
Now that the multi-pack-index may be written as part of auto maintenance at the end of a command, reduce the progress output when the operations are quick. Use start_delayed_progress() instead of start_progress(). Update t5319-multi-pack-index.sh to use GIT_PROGRESS_DELAY=0 now that the progress indicators are conditional.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
The previous change cleaned up loose objects using the 'loose-objects' task, which can be run safely in the background. Add a similar job that performs similar cleanups for pack-files.

One issue with running 'git repack' is that it is designed to repack all pack-files into a single pack-file. While this is the most space-efficient way to store object data, it is not time or memory efficient. This becomes extremely important if the repo is so large that a user struggles to store two copies of the pack on their disk.

Instead, perform an "incremental" repack by collecting a few small pack-files into a new pack-file. The multi-pack-index facilitates this process ever since 'git multi-pack-index expire' was added in 19575c7 (multi-pack-index: implement 'expire' subcommand, 2019-06-10) and 'git multi-pack-index repack' was added in ce1e4a1 (midx: implement midx_repack(), 2019-06-10).

The 'incremental-repack' task runs the following steps:

1. 'git multi-pack-index write' creates a multi-pack-index file if one did not exist, and otherwise will update the multi-pack-index with any new pack-files that appeared since the last write. This is particularly relevant with the background fetch job. When the multi-pack-index sees two copies of the same object, it stores the offset data into the newer pack-file. This means that some old pack-files could become "unreferenced", which I will use to mean "a pack-file that is in the pack-file list of the multi-pack-index, but none of the objects in the multi-pack-index reference a location inside that pack-file."

2. 'git multi-pack-index expire' deletes any unreferenced pack-files and updates the multi-pack-index to drop those pack-files from the list. This is safe to do, as concurrent Git processes will see the multi-pack-index and not open those packs when looking for object contents. (Similar to the 'loose-objects' job, there are some Git commands that open pack-files regardless of the multi-pack-index, but they are rarely used. Further, a user that self-selects to use background operations would likely refrain from using those commands.)

3. 'git multi-pack-index repack --batch-size=<size>' collects a set of pack-files that are listed in the multi-pack-index and creates a new pack-file containing the objects whose offsets are listed by the multi-pack-index to be in those pack-files. The set of pack-files is selected greedily by sorting the pack-files by modified time and adding a pack-file to the set if its "expected size" is smaller than the batch size, until the total expected size of the selected pack-files is at least the batch size. The "expected size" is calculated by taking the size of the pack-file, dividing by the number of objects in the pack-file, and multiplying by the number of objects from the multi-pack-index with offsets in that pack-file. The expected size approximates how much data from that pack-file will contribute to the resulting pack-file size. The intention is that the resulting pack-file will be close in size to the provided batch size. (A sketch of this estimate follows this message.)

The next run of the incremental-repack task will delete these repacked pack-files during the 'expire' step.

In this version, the batch size is set to "0", which ignores the size restrictions when selecting the pack-files. It instead selects all pack-files and repacks all packed objects into a single pack-file. This will be updated in the next change, but it requires doing some calculations that are better isolated to a separate change.

Each of the above steps updates the multi-pack-index file. After each step, we verify the new multi-pack-index. If the new multi-pack-index is corrupt, then delete the multi-pack-index, rewrite it from scratch, and stop doing the later steps of the job. This is intended to be an extra-safe check without leaving a repo with many pack-files and no multi-pack-index.

These steps are based on a similar background maintenance step in Scalar (and VFS for Git) [1]. This was incredibly effective for users of the Windows OS repository. After using the same VFS for Git repository for over a year, some users had _thousands_ of pack-files that combined to up to 250 GB of data. We noticed a few users were running into open file descriptor limits (due in part to a bug in the multi-pack-index fixed by af96fe3 (midx: add packs to packed_git linked list, 2019-04-29)).

These pack-files were mostly small, since they contained the commits and trees that were pushed to the origin in a given hour. The GVFS protocol includes a "prefetch" step that asks for pre-computed pack-files containing commits and trees by timestamp. These pack-files were grouped into "daily" pack-files once a day for up to 30 days. If a user did not request prefetch packs for over 30 days, then they would get the entire history of commits and trees in a new, large pack-file. This led to a large number of pack-files that had poor delta compression.

By running this pack-file maintenance step once per day, these repos with thousands of packs spanning 200+ GB dropped to dozens of pack-files spanning 30-50 GB. This was done all without removing objects from the system and using a constant batch size of two gigabytes. Once the work was done to reduce the pack-files to small sizes, the batch size of two gigabytes means that not every run triggers a repack operation, so the following run will not expire a pack-file. This has kept these repos in a "clean" state.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/PackfileMaintenanceStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
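The "expected size" estimate from step 3 as a small sketch (names assumed): the share of a pack's bytes that the multi-pack-index still references, scaled to bytes:

#include <stdint.h>

static uint64_t expected_size(uint64_t pack_size,
                              uint32_t objects_in_pack,
                              uint32_t objects_referenced_by_midx)
{
        if (!objects_in_pack)
                return 0;  /* guard the division */
        /* pack size / total objects * still-referenced objects */
        return pack_size / objects_in_pack * objects_referenced_by_midx;
}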
When repacking during the 'incremental-repack' task, we use the --batch-size option in 'git multi-pack-index repack'. The initial setting used --batch-size=0 to repack everything into a single pack-file. This is not sustainable for a large repository. The amount of work required is also likely to use too many system resources for a background job.

Update the 'incremental-repack' task by dynamically computing a --batch-size option based on the current pack-file structure.

The dynamic default size is computed with this idea in mind for a client repository that was cloned from a very large remote: there is likely one "big" pack-file that was created at clone time. Thus, do not try repacking it, as it is likely packed efficiently by the server. Instead, we select the second-largest pack-file, and create a batch size that is one larger than that pack-file. If there are three or more pack-files, then this guarantees that at least two will be combined into a new pack-file.

Of course, this means that the second-largest pack-file size is likely to grow over time and may eventually surpass the initially-cloned pack-file. Recall that the pack-file batch is selected in a greedy manner: the packs are considered from oldest to newest and are selected if they have size smaller than the batch size, until the total selected size is larger than the batch size. Thus, that oldest "clone" pack will be first to repack after the new data creates a pack larger than that.

We also want to place some limits on how large these pack-files become, in order to bound the amount of time spent repacking. A maximum batch size of two gigabytes means that large repositories will never be packed into a single pack-file using this job, but also that a repack can still be rather expensive. This is a trade-off that is valuable to have if the maintenance is being run automatically or in the background. Users who truly want to optimize for space and performance (and are willing to pay the upfront cost of a full repack) can use the 'gc' task to do so.

Create a test for this two-gigabyte limit by creating an EXPENSIVE test that generates two pack-files of roughly 2.5 gigabytes in size, then performs an incremental repack. Check that the --batch-size argument in the subcommand uses the hard-coded maximum.

Helped-by: Chris Torek <chris.torek@gmail.com>
Reported-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
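A sketch of the batch-size heuristic; the pack-size scan is simplified to an array, while the second-largest-plus-one rule and the two-gigabyte cap are from the message:

static off_t get_auto_batch_size(const off_t *pack_sizes, int nr)
{
        const off_t two_gb = (off_t)2 * 1024 * 1024 * 1024;
        off_t largest = 0, second = 0;
        int i;

        for (i = 0; i < nr; i++) {
                if (pack_sizes[i] > largest) {
                        second = largest;
                        largest = pack_sizes[i];
                } else if (pack_sizes[i] > second) {
                        second = pack_sizes[i];
                }
        }

        /* one byte larger than the second-largest pack, capped at 2GB */
        if (second + 1 > two_gb)
                return two_gb;
        return second + 1;
}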
The incremental-repack task updates the multi-pack-index by deleting pack-files that have been replaced with new packs, then repacking a batch of small pack-files into a larger pack-file. This incremental repack is faster than rewriting all object data, but is slower than some other maintenance activities.

The 'maintenance.incremental-repack.auto' config option specifies how many pack-files should exist outside of the multi-pack-index before running the step. These pack-files could be created by 'git fetch' commands or by the loose-objects task. The default value is 10. Setting the option to zero disables the task with the '--auto' option, and a negative value makes the task run every time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Force-pushed 407c123 to a78dbc7
This is a second attempt at redesigning Git's repository maintenance patterns. The first attempt [1] included a way to run jobs in the background using a long-lived process; that idea was rejected and is not included in this series. A future series will use the OS to handle scheduling tasks.
[1] https://lore.kernel.org/git/pull.597.git.1585946894.gitgitgadget@gmail.com/
As mentioned before, 'git gc' already plays the role of maintaining Git repositories. It has accumulated several smaller pieces in its long history, including expiring reflogs, clearing rerere logs, packing refs, updating the commit-graph file, and deleting unreachable objects.

While expiring reflogs, clearing rerere logs, and deleting unreachable objects are suitable under the guise of "garbage collection", packing refs and updating the commit-graph file are not as obviously fitting. Further, these operations are "all or nothing" in that they rewrite almost all repository data, which does not perform well at extremely large scales. These operations can also be disruptive to foreground Git commands when 'git gc --auto' triggers during routine use.

This series does not intend to change what 'git gc' does, but instead creates new choices for automatic maintenance activities, of which 'git gc' remains the only one enabled by default.

The new maintenance tasks are 'commit-graph', 'prefetch', 'loose-objects', and 'incremental-repack'.
These tasks are all disabled by default, but can be enabled with config options or run explicitly using "git maintenance run --task=<task>". There are additional config options to allow customizing the conditions for which the tasks run during the '--auto' option. (The 'prefetch' task will never run with the '--auto' option.)
Because 'gc' is implemented as a maintenance task, the most dramatic change of this series is to convert the 'git gc --auto' calls into 'git maintenance run --auto' calls at the end of some Git commands. By default, the only change is that 'git gc --auto' will be run below an additional 'git maintenance' process.
The 'git maintenance' builtin has a 'run' subcommand so it can be extended later with subcommands that manage background maintenance, such as 'start', 'stop', 'pause', or 'schedule'. These are not the subject of this series, as it is important to focus on the maintenance activities themselves.
An expert user could set up scheduled background maintenance themselves with the current series. I have the following 'crontab' data set up to run maintenance on an hourly basis:

My config includes all tasks except the 'gc' task. The hourly run is over-aggressive, but is sufficient for testing. I'll replace it with daily when I feel satisfied.
Hopefully this direction is seen as a positive one. My goal was to add more options for expert users, along with the flexibility to create background maintenance via the OS in a later series.
OUTLINE
Patches 1-4 remove some references to the_repository in builtin/gc.c before we start depending on code in that builtin.
Patches 5-7 create the 'git maintenance run' builtin and subcommand as a simple shim over 'git gc', and replace calls to 'git gc --auto' from other commands.
Patches 8-15 create new maintenance tasks. These are the same tasks sent in the previous RFC.
Patches 16-21 create more customization through config and perform other polish items.
FUTURE WORK
Add 'start', 'stop', and 'schedule' subcommands to initialize the commands run in the background. You can see my progress towards this goal here: [RFC] Maintenance III: background maintenance #680
Split the 'gc' builtin into smaller maintenance tasks that are enabled by default, but might have different '--auto' conditions and more config options.
Replace config like 'gc.writeCommitGraph' and 'fetch.writeCommitGraph' with use of the 'commit-graph' task.
Update the builtin to use 'struct repository *r' properly, especially when calling subcommands.

UPDATES in V2
I'm sending this between v2.28.0-rc2 and v2.28.0, as things have become a bit quiet around the release.
The biggest disruption to the range-diff is that I removed the premature use of 'struct repository *r' and instead continue to rely on 'the_repository'. This means several patches were dropped that did prep work in builtin/gc.c.

I dropped the task hashmap and opted for a linear scan. This task list will always be too small to justify the extra complication of the hashmap.
struct maintenance_opts is properly static now.
Some tasks are renamed: fetch -> prefetch, pack-files -> incremental-repack.
With the rename, the prefetch task uses refs/prefetch/ instead of refs/hidden/.
A trace2 region around the task executions is added.
UPDATES in V3
This series is now based on jk/strvec, as there are several places where I was adding new callers to argv_array_push* and run_command_v_opt() which have been replaced with strvec_push*() and run_command(), using a 'struct child_process'.
I added and updated Junio's patch from jc/no-update-fetch-head in the proper place before the 'prefetch' task. The 'prefetch' task now uses --no-write-fetch-head to avoid issues with FETCH_HEAD.

Since there were concerns around core.multiPackIndex, I added some extra care when checking for that config being enabled. Since that already needed adjusting the config value from its existing implementation in midx.c, I added it to repo-settings and made it enabled by default.
Lots of feedback from the previous round. Thanks, all! I fully expect to have at least one more round of feedback, but things are improving quite a lot.
UPDATES in V4
The biggest change here is the use of "test_subcommand", based on Jonathan Nieder's approach. This requires having the exact command-line figured out, which now requires spelling out all --no- options. I also added a bunch of "2>/dev/null" checks because of the isatty(2) calls. Without that, the behavior will change depending on whether the test is run with -x/-v or without.
The 0x7FFF/0x7FFFFFFF constant problem is fixed with an EXPENSIVE test that verifies it.
The patches have been reordered so that the first half (patches 1-11) implement the primary behavior with only the 'gc' and 'commit-graph' tasks. The second half (patches 12-20) implement the 'prefetch', 'loose-objects', and 'incremental-repack' tasks. This could allow splitting the series into two parts more easily, if needed.
The option parsing has changed to use a local struct and pass that struct to the helper methods. This is instead of having a global singleton.
Thanks,
-Stolee
Cc: Johannes.Schindelin@gmx.de, sandals@crustytoothpaste.net, steadmon@google.com, jrnieder@gmail.com, peff@peff.net, congdanhqx@gmail.com, phillip.wood123@gmail.com, emilyshaffer@google.com, sluongng@gmail.com, jonathantanmy@google.com