
Conversation


@neerajsi-msft neerajsi-msft commented Feb 17, 2021

Writing an index 8K at a time invokes the OS filesystem and caching code
very frequently, introducing noticeable overhead while writing large
indexes. When experimenting with different write buffer sizes on Windows
while writing the Windows OS repo index (260MB), most of the benefit came
from bumping the index write buffer size to 64K. I picked 128K to ensure
that we're past the knee of the curve.

With this change, the time under do_write_index for an index with 3M
files goes from ~1.02s to ~0.72s.

Signed-off-by: Neeraj Singh neerajsi@ntdev.microsoft.com

Note: This was previously discussed on the mailing list in 2016 at:
https://lore.kernel.org/git/1458350341-12276-1-git-send-email-dturner@twopensource.com/.

Since then, I believe a couple of things have changed:

  • 'Small' development platforms like the Raspberry Pi have gotten larger (4GB RAM).
  • Spectre and Meltdown make individual system calls more expensive when mitigations are enabled.
  • There have been many investments in making very large repos scale well in Git, so huge repos are more common now.

cc: Jeff Hostetler git@jeffhostetler.com
cc: Neeraj Singh nksingh85@gmail.com
cc: Chris Torek chris.torek@gmail.com


gitgitgadget bot commented Feb 17, 2021

Welcome to GitGitGadget

Hi @neerajsi-msft, and welcome to GitGitGadget, the GitHub App to send patch series to the Git mailing list from GitHub Pull Requests.

Please make sure that your Pull Request has a good description, as it will be used as the cover letter.

Also, it is a good idea to review the commit messages one last time, as the Git project expects them in a quite specific form:

  • the lines should not exceed 76 columns,
  • the first line should be like a header and typically start with a prefix like "tests:" or "commit:",
  • the commit message's body should describe the "why?" of the change, and
  • finally, the commit message should end in a Signed-off-by: line matching the commit's author.

It is in general a good idea to await the automated test ("Checks") in this Pull Request before contributing the patches, e.g. to avoid trivial issues such as unportable code.

Contributing the patches

Before you can contribute the patches, your GitHub username needs to be added to the list of permitted users. Any already-permitted user can do that, by adding a comment to your PR of the form /allow. A good way to find other contributors is to locate recent pull requests where someone has been /allowed.

Both the person who commented /allow and the PR author are able to /allow you.

An alternative is the channel #git-devel on the FreeNode IRC network:

<newcontributor> I've just created my first PR, could someone please /allow me? https://github.com/gitgitgadget/git/pull/12345
<veteran> newcontributor: it is done
<newcontributor> thanks!

Once on the list of permitted usernames, you can contribute the patches to the Git mailing list by adding a PR comment /submit.

If you want to see what email(s) would be sent for a /submit request, add a PR comment /preview to have the email(s) sent to you. You must have a public GitHub email address for this.

After you submit, GitGitGadget will respond with another comment that contains the link to the cover letter mail in the Git mailing list archive. Please make sure to monitor the discussion in that thread and to address comments and suggestions (while the comments and suggestions will be mirrored into the PR by GitGitGadget, you will still want to reply via mail).

If you do not want to subscribe to the Git mailing list just to be able to respond to a mail, you can download the mbox from the Git mailing list archive (click the (raw) link), then import it into your mail program. If you use GMail, you can do this via:

curl -g --user "<EMailAddress>:<Password>" \
    --url "imaps://imap.gmail.com/INBOX" -T /path/to/raw.txt

To iterate on your change, i.e. send a revised patch or patch series, you will first want to (force-)push to the same branch. You probably also want to modify your Pull Request description (or title). It is a good idea to summarize the revision by adding something like this to the cover letter (read: by editing the first comment on the PR, i.e. the PR description):

Changes since v1:
- Fixed a typo in the commit message (found by ...)
- Added a code comment to ... as suggested by ...
...

To send a new iteration, just add another PR comment with the contents: /submit.

Need help?

New contributors who want advice are encouraged to join git-mentoring@googlegroups.com, where volunteers who regularly contribute to Git are willing to answer newbie questions, give advice, or otherwise provide mentoring to interested contributors. You must join in order to post or view messages, but anyone can join.

You may also be able to find help in real time in the developer IRC channel, #git-devel on Freenode. Remember that IRC does not support offline messaging, so if you send someone a private message and log out, they cannot respond to you. The scrollback of #git-devel is archived, though.


dscho commented Feb 18, 2021

/allow


gitgitgadget bot commented Feb 18, 2021

User neerajsi-msft is now allowed to use GitGitGadget.

WARNING: neerajsi-msft has no public email address set on GitHub

@neerajsi-msft

/preview


gitgitgadget bot commented Feb 18, 2021

Error: Could not determine public email of neerajsi-msft


dscho commented Feb 18, 2021

Error: Could not determine public email of neerajsi-msft

This means that the GitHub profile does not show your email address publicly. GitGitGadget needs this, though (at least for the moment) to be able to Cc: the cover letter to you.

@neerajsi-msft

/preview


gitgitgadget bot commented Feb 18, 2021

Preview email sent as pull.877.git.1613613918861.gitgitgadget@gmail.com

@neerajsi-msft

@dscho Thanks for helping me out! I'm going to submit.

@neerajsi-msft

/submit


gitgitgadget bot commented Feb 18, 2021

Submitted as pull.877.git.1613616506949.gitgitgadget@gmail.com

To fetch this version into FETCH_HEAD:

git fetch https://github.com/gitgitgadget/git pr-877/neerajsi-msft/neerajsi/index-buffer-v1

To fetch this version to local tag pr-877/neerajsi-msft/neerajsi/index-buffer-v1:

git fetch --no-tags https://github.com/gitgitgadget/git tag pr-877/neerajsi-msft/neerajsi/index-buffer-v1

@derrickstolee

Thanks, @dscho, for /allowing before I was able to. And thanks, @neerajsi-msft, for doing the deep investigation here. I appreciate the details you included.


gitgitgadget bot commented Feb 19, 2021

On the Git mailing list, Jeff Hostetler wrote (reply to this):



On 2/17/21 9:48 PM, Neeraj K. Singh via GitGitGadget wrote:
> From: Neeraj Singh <neerajsi@ntdev.microsoft.com>
> 
> Writing an index 8K at a time invokes the OS filesystem and caching code
> very frequently, introducing noticeable overhead while writing large
> indexes. When experimenting with different write buffer sizes on Windows
> writing the Windows OS repo index (260MB), most of the benefit came by
> bumping the index write buffer size to 64K. I picked 128K to ensure that
> we're past the knee of the curve.
> 
> With this change, the time under do_write_index for an index with 3M
> files goes from ~1.02s to ~0.72s.

[...]

>   
> -#define WRITE_BUFFER_SIZE 8192
> +#define WRITE_BUFFER_SIZE (128 * 1024)
>   static unsigned char write_buffer[WRITE_BUFFER_SIZE];
>   static unsigned long write_buffer_len;

[...]

Very nice.

I can confirm that this gives nice gains on Windows. (I'm using
the Office repo, which has a 188MB index file: 2.1M files at HEAD.)
Running "git status" shows a gain of about 200ms.

We get a smaller gain on Mac of about 50ms (again, using the Office
repo).

So, you may add my sign-off or ACK to this.
     Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>



FWIW, you might take a look at `t/perf/p0007-write-cache.sh`.
Update it as follows:

```
diff --git a/t/perf/p0007-write-cache.sh b/t/perf/p0007-write-cache.sh
index 09595264f0..337280ff1c 100755
--- a/t/perf/p0007-write-cache.sh
+++ b/t/perf/p0007-write-cache.sh
@@ -4,7 +4,8 @@ test_description="Tests performance of writing the index"

  . ./perf-lib.sh

-test_perf_default_repo
+test_perf_large_repo

  test_expect_success "setup repo" '
         if git rev-parse --verify refs/heads/p0006-ballast^{commit}
```


Then you can run it like this:

     $ cd t/perf
     $ GIT_PERF_LARGE_REPO=/path/to/your/enlistment ./p0007-write-cache.sh

Then you can run it with the small and then with the large buffer and
get times for essentially just the index write in isolation.

Hope this helps,
Jeff


gitgitgadget bot commented Feb 19, 2021

User Jeff Hostetler <git@jeffhostetler.com> has been added to the cc: list.


gitgitgadget bot commented Feb 20, 2021

On the Git mailing list, Junio C Hamano wrote (reply to this):

Jeff Hostetler <git@jeffhostetler.com> writes:

> On 2/17/21 9:48 PM, Neeraj K. Singh via GitGitGadget wrote:
>> From: Neeraj Singh <neerajsi@ntdev.microsoft.com>
>> Writing an index 8K at a time invokes the OS filesystem and caching
>> code
>> very frequently, introducing noticeable overhead while writing large
>> indexes. When experimenting with different write buffer sizes on Windows
>> writing the Windows OS repo index (260MB), most of the benefit came by
>> bumping the index write buffer size to 64K. I picked 128K to ensure that
>> we're past the knee of the curve.
>> With this change, the time under do_write_index for an index with 3M
>> files goes from ~1.02s to ~0.72s.
>
> [...]
>
>>   -#define WRITE_BUFFER_SIZE 8192
>> +#define WRITE_BUFFER_SIZE (128 * 1024)
>>   static unsigned char write_buffer[WRITE_BUFFER_SIZE];
>>   static unsigned long write_buffer_len;
>
> [...]
>
> Very nice.

I wonder if we gain more by going say 4M buffer size or even larger?

Is this something we can make the system auto-tune itself?  This is
not about reading but writing, so we already have enough information
to estimate how much we would need to write out.

Thanks.

Writing an index 8K at a time invokes the OS filesystem and caching code
very frequently, introducing noticeable overhead while writing large
indexes. When experimenting with different write buffer sizes on Windows
writing the Windows OS repo index (260MB), most of the benefit came by
bumping the index write buffer size to 64K. I picked 128K to ensure that
we're past the knee of the curve.

With this change, the time under do_write_index for an index with 3M
files goes from ~1.02s to ~0.72s.

Signed-off-by: Neeraj Singh <neerajsi@microsoft.com>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>

gitgitgadget bot commented Feb 20, 2021

On the Git mailing list, Neeraj Singh wrote (reply to this):

On Fri, Feb 19, 2021 at 11:46 PM Junio C Hamano <gitster@pobox.com> wrote:
>
> Jeff Hostetler <git@jeffhostetler.com> writes:
>
> > On 2/17/21 9:48 PM, Neeraj K. Singh via GitGitGadget wrote:
> >> From: Neeraj Singh <neerajsi@ntdev.microsoft.com>
> >> Writing an index 8K at a time invokes the OS filesystem and caching
> >> code
> >> very frequently, introducing noticeable overhead while writing large
> >> indexes. When experimenting with different write buffer sizes on Windows
> >> writing the Windows OS repo index (260MB), most of the benefit came by
> >> bumping the index write buffer size to 64K. I picked 128K to ensure that
> >> we're past the knee of the curve.
> >> With this change, the time under do_write_index for an index with 3M
> >> files goes from ~1.02s to ~0.72s.
> >
> > [...]
> >
> >>   -#define WRITE_BUFFER_SIZE 8192
> >> +#define WRITE_BUFFER_SIZE (128 * 1024)
> >>   static unsigned char write_buffer[WRITE_BUFFER_SIZE];
> >>   static unsigned long write_buffer_len;
> >
> > [...]
> >
> > Very nice.
>
> I wonder if we gain more by going say 4M buffer size or even larger?
>
> Is this something we can make the system auto-tune itself?  This is
> not about reading but writing, so we already have enough information
> to estimate how much we would need to write out.
>
> Thanks.
>

Hi Junio,
At some point the cost of the memcpy into the filesystem cache begins to
dominate the cost of the system call, so increasing the buffer size
has diminishing returns.

An alternate approach would be to mmap the index file we are trying to
write and thereby copy the data directly into the filesystem cache pages.
That's a much more difficult change to make and verify, so I'd rather
leave that as an exercise for the reader for now :).

Thanks,
-Neeraj


gitgitgadget bot commented Feb 20, 2021

User Neeraj Singh <nksingh85@gmail.com> has been added to the cc: list.


gitgitgadget bot commented Feb 21, 2021

On the Git mailing list, Junio C Hamano wrote (reply to this):

Neeraj Singh <nksingh85@gmail.com> writes:

>> >>   -#define WRITE_BUFFER_SIZE 8192
>> >> +#define WRITE_BUFFER_SIZE (128 * 1024)
>> >>   static unsigned char write_buffer[WRITE_BUFFER_SIZE];
>> >>   static unsigned long write_buffer_len;
>> >
>> > [...]
>> >
>> > Very nice.
>>
>> I wonder if we gain more by going say 4M buffer size or even larger?
>>
>> Is this something we can make the system auto-tune itself?  This is
>> not about reading but writing, so we already have enough information
>> to estimate how much we would need to write out.
>>
>> Thanks.
>>
>
> Hi Junio,
> At some point the cost of the memcpy into the filesystem cache begins to
> dominate the cost of the system call, so increasing the buffer size
> has diminishing returns.

Yes, I know that kind of "general principle".  

If I recall correctly, we used to pass too large a buffer to a
single write(2) system call (I do not know if it was for the
index---I suspect it was for some other data), and found out that it
made response to ^C take too long, and tuned the buffer size down.

I was asking where the sweet spot for this codepath would be, and if
we can take a measurement to make a better decision than "8k feels
too small and 128k turns out to be better than 8k".  It does not
tell us if 128k would always do better than 64k or 256k, for
example.

I suspect that the sweet spot would be dependent on many parameters
(not just the operating system, but also relative speed among
memory, "disk", and cpu, and also the size of the index) and if we
can devise a way to auto-tune it so that we do not have to worry
about it.

Thanks.


gitgitgadget bot commented Feb 24, 2021

On the Git mailing list, Neeraj Singh wrote (reply to this):

On Sun, Feb 21, 2021 at 4:51 AM Junio C Hamano <gitster@pobox.com> wrote:
>
> Neeraj Singh <nksingh85@gmail.com> writes:
>
> >> >>   -#define WRITE_BUFFER_SIZE 8192
> >> >> +#define WRITE_BUFFER_SIZE (128 * 1024)
> >> >>   static unsigned char write_buffer[WRITE_BUFFER_SIZE];
> >> >>   static unsigned long write_buffer_len;
> >> >
> >> > [...]
> >> >
> >> > Very nice.
> >>
> >> I wonder if we gain more by going say 4M buffer size or even larger?
> >>
> >> Is this something we can make the system auto-tune itself?  This is
> >> not about reading but writing, so we already have enough information
> >> to estimate how much we would need to write out.
> >>
> >> Thanks.
> >>
> >
> > Hi Junio,
> > At some point the cost of the memcpy into the filesystem cache begins to
> > dominate the cost of the system call, so increasing the buffer size
> > has diminishing returns.
>
> Yes, I know that kind of "general principle".
>
> If I recall correctly, we used to pass too large a buffer to a
> single write(2) system call (I do not know if it was for the
> index---I suspect it was for some other data), and found out that it
> made response to ^C take too long, and tuned the buffer size down.
>
> I was asking where the sweet spot for this codepath would be, and if
> we can take a measurement to make a better decision than "8k feels
> too small and 128k turns out to be better than 8k".  It does not
> tell us if 128k would always do better than 64k or 256k, for
> example.
>
> I suspect that the sweet spot would be dependent on many parameters
> (not just the operating system, but also relative speed among
> memory, "disk", and cpu, and also the size of the index) and if we
> can devise a way to auto-tune it so that we do not have to worry
> about it.
>
> Thanks.

I think the main concern on a reasonably-configured machine is the speed
of memcpy and the cost of the code to get to that memcpy (syscall, file
system free space allocator, page allocator, mapping from file offset to
cache page). Disk shouldn't matter, since we write the file with OS
buffering, and buffer flushing will happen asynchronously some time after
the git command completes.

If we think about doing the fastest possible memcpy, I think we want to
aim for maximizing the use of the CPU cache. A write buffer that's too
big would result in most of the data being flushed to DRAM between when
git writes it and the OS reads it. L1 caches are typically ~32K and L2
caches are on the order of 256K. We probably don't want to exceed the
size of the L2 cache, and we should actually leave some room for OS code
and data, so 128K is a good number from that perspective.

I collected data from an experiment with different buffer sizes on
Windows on my 3.6GHz Xeon W-2133 machine:
https://docs.google.com/spreadsheets/d/1Bu6pjp53NPDK6AKQI_cry-hgxEqlicv27dptoXZYnwc/edit?usp=sharing

The timing is pretty much in the noise after we pass 32K. So I think 8K
is too small, but given the flatness of the curve we can feel good about
any value above 32K from a performance perspective. I still think 128K is
a decent number that won't likely need to be changed for some time.

Thanks,
-Neeraj


gitgitgadget bot commented Feb 25, 2021

This branch is now known as ns/raise-write-index-buffer-size.


gitgitgadget bot commented Feb 25, 2021

This patch series was integrated into seen via git@e8f7cbe.

@gitgitgadget gitgitgadget bot added the seen label Feb 25, 2021

gitgitgadget bot commented Feb 25, 2021

On the Git mailing list, Junio C Hamano wrote (reply to this):

Neeraj Singh <nksingh85@gmail.com> writes:

> If we think about doing the fastest possible memcpy, I think we want to aim for
> maximizing the use of the CPU cache.  A write buffer that's too big would result
> in most of the data being flushed to DRAM between when git writes it and the
> OS reads it.  L1 caches are typically ~32K and L2 caches are on the
> order of 256K.
> We probably don't want to exceed the size of the L2 cache, and we
> should actually
> leave some room for OS code and data, so 128K is a good number from
> that perspective.
>
> I collected data from an experiment with different buffer sizes on Windows on my
> 3.6Ghz Xeon W-2133 machine:
> https://docs.google.com/spreadsheets/d/1Bu6pjp53NPDK6AKQI_cry-hgxEqlicv27dptoXZYnwc/edit?usp=sharing
>
> The timing is pretty much in the noise after we pass 32K.  So I think
> 8K is too small, but
> given the flatness of the curve we can feel good about any value above
> 32K from a performance
> perspective.  I still think 128K is a decent number that won't likely
> need to be changed for
> some time.

Thanks for a supporting graph.

I can very well imagine that it would have been tempting to instead
say "after we pass 128k" while explaining exactly the same graph,
and doing so would have given a more coherent argument to support
the choice of 128k the patch made.  You knew that a "then perhaps we
can reclaim 96k by sizing the buffer down a bit?" would become a
reasonable response, but you still chose to be honest, which I kinda
like ;-)




gitgitgadget bot commented Feb 25, 2021

On the Git mailing list, Chris Torek wrote (reply to this):

> Neeraj Singh <nksingh85@gmail.com> writes:
> > I collected data from an experiment with different buffer sizes on Windows on my
> > 3.6Ghz Xeon W-2133 machine:
> > https://docs.google.com/spreadsheets/d/1Bu6pjp53NPDK6AKQI_cry-hgxEqlicv27dptoXZYnwc/edit?usp=sharing
> >
> > The timing is pretty much in the noise after we pass 32K.  So I think
> > 8K is too small, but
> > given the flatness of the curve we can feel good about any value above
> > 32K from a performance
> > perspective.  I still think 128K is a decent number that won't likely
> > need to be changed for
> > some time.

Linux/BSD/etc `stat` system calls report st_blksize values to tell
user code the optimal size for read and write calls.  Does Windows
have one?  (It's not POSIX but is XSI.)

(How *well* the OS reports `st_blksize` is another question
entirely, but at least if the report says, say, 128k, and that's
wrong, that's no longer Git's fault. :-) )

On Wed, Feb 24, 2021 at 10:46 PM Junio C Hamano <gitster@pobox.com> wrote:
> Thanks for a supporting graph.
>
> I can very well imagine that it would have been tempting to instead
> say "after we pass 128k" while explaining exactly the same graph,
> and doing so would have given a more coherent argument to support
> the choice of 128k the patch made.  You knew that a "then perhaps we
> can reclaim 96k by sizing the buffer down a bit?" would become a
> reasonable response, but you still chose to be honest, which I kinda
> like ;-)

128K is correct for ZFS; 64K is typically correct for UFS2; 8K is
the old UFS1 size.  Anything under that has been too small for
a long time. :-)

Chris


gitgitgadget bot commented Feb 25, 2021

User Chris Torek <chris.torek@gmail.com> has been added to the cc: list.


gitgitgadget bot commented Feb 25, 2021

On the Git mailing list, Junio C Hamano wrote (reply to this):

Chris Torek <chris.torek@gmail.com> writes:

> Linux/BSD/etc `stat` system calls report st_blksize values to tell
> user code the optimal size for read and write calls.  Does Windows
> have one?  (It's not POSIX but is XSI.)
>
> (How *well* the OS reports `st_blksize` is another question
> entirely, but at least if the report says, say, 128k, and that's
> wrong, that's no longer Git's fault. :-) )
> ...
> 128K is correct for ZFS; 64K is typically correct for UFS2; 8K is
> the old UFS1 size.  Anything under that has been too small for
> a long time. :-)

That's rather tempting.  After opening a locked index to write
things out, the value is a single fstat() away...


gitgitgadget bot commented Feb 25, 2021

On the Git mailing list, Neeraj Singh wrote (reply to this):

On Wed, Feb 24, 2021 at 11:16 PM Junio C Hamano <gitster@pobox.com> wrote:
>
> Chris Torek <chris.torek@gmail.com> writes:
>
> > Linux/BSD/etc `stat` system calls report st_blksize values to tell
> > user code the optimal size for read and write calls.  Does Windows
> > have one?  (It's not POSIX but is XSI.)
> >
> > (How *well* the OS reports `st_blksize` is another question
> > entirely, but at least if the report says, say, 128k, and that's
> > wrong, that's no longer Git's fault. :-) )
> > ...
> > 128K is correct for ZFS; 64K is typically correct for UFS2; 8K is
> > the old UFS1 size.  Anything under that has been too small for
> > a long time. :-)
>
> That's rather tempting.  After opening a locked index to write
> things out, the value is a single fstat() away...
>

From a quick perusal of freebsd, st_blksize seems to be the system
PAGE_SIZE by default (4k most of the time, I assume). The Windows
equivalent of this value is really tuned to what you want to send down
when bypassing the cache (to avoid partial cluster/stripe writes).

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html
doesn't elicit much confidence. The units of st_blksize aren't even
defined.

Thanks,
Neeraj


gitgitgadget bot commented Feb 25, 2021

On the Git mailing list, Chris Torek wrote (reply to this):

On Wed, Feb 24, 2021 at 11:36 PM Neeraj Singh <nksingh85@gmail.com> wrote:
> From a quick perusal of freebsd, st_blksize seems to be the system
> PAGE_SIZE by default (4k most of the time, I assume). The Windows
> equivalent of this value is really tuned to what you want to send down
> when bypassing the cache (to avoid partial cluster/stripe writes).

It's page-size for pipes, sockets, etc., but for real files, it's based on
a report from the underlying file system.  It's actually 8k on a typical
ancient UFS file system, 64K on UFS2, and 128K on ZFS, on FreeBSD.


> https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html
> doesn't elicit much confidence. The units of st_blksize aren't even
> defined.

Despite POSIX's rather obstreperous definition of st_blksize, the
units are actually just bytes, in practice.

Chris


gitgitgadget bot commented Feb 26, 2021

This patch series was integrated into seen via git@f577814.


gitgitgadget bot commented Feb 26, 2021

This patch series was integrated into seen via git@b55ea45.


gitgitgadget bot commented Feb 26, 2021

This patch series was integrated into next via git@8f43f67.

@gitgitgadget gitgitgadget bot added the next label Feb 26, 2021

gitgitgadget bot commented Feb 27, 2021

This patch series was integrated into seen via git@07c0a2e.


gitgitgadget bot commented Mar 1, 2021

This patch series was integrated into seen via git@ada7c5f.


gitgitgadget bot commented Mar 1, 2021

This patch series was integrated into next via git@ada7c5f.


gitgitgadget bot commented Mar 1, 2021

This patch series was integrated into master via git@ada7c5f.

@gitgitgadget gitgitgadget bot added the master label Mar 1, 2021

gitgitgadget bot commented Mar 1, 2021

Closed via ada7c5f.

@gitgitgadget gitgitgadget bot closed this Mar 1, 2021