Make SignTool accept a list of files to be signed #396
Just to clarify, we are going to add this new functionality. From the discussion in #58 we decided to have:
|
Do we still need the explicit manifest? Passing the list of files in an item group to the build task should be sufficient for all scenarios, no? |
@jaredpar was advocating for it. Jared, do you still want to have the explicit manifest? |
I am a bit confused now. Does that mean that we are going to keep what we have today, i.e., support for SignToolData.json, and that the SignToolTask will accept an ItemGroup with the list of *.nupkgs to sign? I want to keep this issue + PR short. Thus, I'd like to scope it to only receiving the list of files to sign. Previous behavior will be maintained for now. Does that work for you? |
Yes. Unless @jaredpar considers it no longer necessary. The two ways to call the Sign task, as I understood from the last meeting, are:
For the 1st scenario, the SignTool is in charge of reading the list of files to sign and the list of files to exclude, creating an ItemGroup with:
=> No change to what we have today. For the 2nd scenario, the SignTool is in charge of looking at each directory. If there are NuGet packages, it unpacks them to extract the dlls that need to be signed, extracts the strong name and the certificate, and creates an ItemGroup with that information. => Work to be done. At the end, from both entry points we have an ItemGroup with the list of files that need to be signed. @tmat please correct where necessary |
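Since a .nupkg is just a zip archive, the unpacking step in the 2nd scenario can be illustrated with System.IO.Compression. This is only a sketch of the idea, not the actual task code, and the package path is a placeholder.

```csharp
// Sketch only: enumerate the dlls inside a NuGet package so they can be fed
// into the ItemGroup of files to sign. "MyPackage.1.0.0.nupkg" is a placeholder.
using System;
using System.IO.Compression;

class NupkgScan
{
    static void Main()
    {
        using (ZipArchive archive = ZipFile.OpenRead("MyPackage.1.0.0.nupkg"))
        {
            foreach (ZipArchiveEntry entry in archive.Entries)
            {
                if (!entry.FullName.EndsWith(".dll", StringComparison.OrdinalIgnoreCase))
                    continue;

                // In the real task these entries would be extracted and added to the
                // ItemGroup of files to sign, together with strong name / certificate info.
                Console.WriteLine(entry.FullName);
            }
        }
    }
}
```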
Just to confirm some things:
|
The task itself accepts a list of files, not directories.
Containers (NuPkgs, VSIX) may contain other containers. The current task already handles that. |
Got it. Thanks.
Currently SignTool processes containers recursively, but it does so only to find the files, and all files are signed with the same certificate + strong name. What I am thinking is that I'll need to do that search as well, but for extracting the strong names + certificates(?), since each file may use different values. What do you think? |
I don't understand what the problem is. Each file is signed separately. For managed .dlls and .exes, the certificate to be used for each file is determined from the public key token of the assembly's strong name. For VSIX and NuGet containers the certificate is given. In rare cases the default certificate that we associate with the PKT won't be the right one. If a repo needs to override the certificate, it would specify something like the following item group in its Versions.props file:
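A hypothetical example of such an override, using the FileSignInfo item name and the PublicKeyToken / TargetFramework / CertificateName metadata referred to later in this thread (the concrete values shown are made up):

```xml
<ItemGroup>
  <!-- Hypothetical values: override the Authenticode certificate for one specific binary. -->
  <FileSignInfo Include="Microsoft.Example.dll"
                PublicKeyToken="31bf3856ad364e35"
                TargetFramework=".NETFramework,Version=v4.6.1"
                CertificateName="ExampleOverrideCert" />
</ItemGroup>
```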
The task would match PKT and TargetFramework against the respective values in assembly metadata. |
I'm fine with it being inferred as part of the build, but the manifest should still be a visible artifact output of the build. Having it around makes it trivial to audit how changes to the build impacted the set of signed binaries.
Yes. The same nesting issue exists for VSIX. The tool is designed to handle this exact case. |
The build task now outputs the list of files it signs into binlog as messages. Is that sufficient? |
Is it done in a single location or spread out? What is really needed here is a way to concisely view the signing state of a build. If it's easy to see it as a single element in the binlog, that's great. But if I have to hunt through a bunch of different messages, that makes it harder to validate. |
Single location |
How is that information passed? |
@JohnTortugo that would need to be determined. You would have to essentially specify a single one for each file type in a given directory. |
All VSIXes are signed by |
I think I have code ready for a PR, but first I'd like to check a few things:
Questions:
|
|
Indeed that's the case at this time. It's not like managed binaries where we use different certificates for mysterious, but likely valid, reasons. Aren't native EXEs the same way: one certificate to rule them all? |
I'm having a hard time trying to use PEReader - there's barely any documentation.
which I assume is not acceptable due to the
Thanks a lot @rainersigwald! |
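For reference, a minimal sketch of the kind of metadata lookup being discussed, assuming System.Reflection.Metadata / System.Reflection.PortableExecutable; this is not code from the thread or from the PR, and the helper and path names are invented.

```csharp
// Sketch: locate TargetFrameworkAttribute in assembly metadata via PEReader/MetadataReader.
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

static class TargetFrameworkProbe
{
    public static string GetTargetFramework(string path)
    {
        using (FileStream stream = File.OpenRead(path))
        using (var peReader = new PEReader(stream))
        {
            MetadataReader reader = peReader.GetMetadataReader();
            foreach (CustomAttributeHandle handle in reader.GetAssemblyDefinition().GetCustomAttributes())
            {
                CustomAttribute attribute = reader.GetCustomAttribute(handle);
                if (GetTypeName(reader, attribute.Constructor) != "System.Runtime.Versioning.TargetFrameworkAttribute")
                    continue;

                // The attribute blob is a 0x0001 prolog followed by one serialized string.
                BlobReader blob = reader.GetBlobReader(attribute.Value);
                blob.ReadUInt16();                  // skip prolog
                return blob.ReadSerializedString(); // e.g. ".NETFramework,Version=v4.6.1"
            }
            return null; // attribute not present (possible for some binaries)
        }
    }

    // Resolve the namespace-qualified name of the attribute constructor's declaring type.
    private static string GetTypeName(MetadataReader reader, EntityHandle ctor)
    {
        EntityHandle type;
        if (ctor.Kind == HandleKind.MemberReference)
            type = reader.GetMemberReference((MemberReferenceHandle)ctor).Parent;
        else if (ctor.Kind == HandleKind.MethodDefinition)
            type = reader.GetMethodDefinition((MethodDefinitionHandle)ctor).GetDeclaringType();
        else
            return null;

        if (type.Kind == HandleKind.TypeReference)
        {
            TypeReference tr = reader.GetTypeReference((TypeReferenceHandle)type);
            return reader.GetString(tr.Namespace) + "." + reader.GetString(tr.Name);
        }
        if (type.Kind == HandleKind.TypeDefinition)
        {
            TypeDefinition td = reader.GetTypeDefinition((TypeDefinitionHandle)type);
            return reader.GetString(td.Namespace) + "." + reader.GetString(td.Name);
        }
        return null;
    }
}
```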
Does this only override the certificate? That is, will the strong name still be the one specified in @(StrongNameSignInfo)? |
The key to use for strong name signing still comes from @(StrongNameSignInfo). The only reason why we need to check TargetFrameworkAttribute is when 2 binaries that have the same strong name (which includes the PKT) need to use a different Authenticode certificate for some reason. |
Can it occur that for some random binary we can't extract the TargetFramework? I'm thinking of something like a stripped-down version of the binary. I have a file here (named Microsoft.VisualStudio.CodeCoverage.Shim.dll, TFM=net461) for which I've tried a few options and none worked. I am wondering whether it's possible that the information isn't present in the file at all. |
Regarding having or not having an exclusion list of files: after some changes I made today, files nested in containers will be found automatically. Doesn't that make it necessary (or desirable) to have some way to exclude some files from the signing process? Otherwise, we'll be signing files nested in many "system" and "microsoft" namespaces, like System.*, Microsoft.VisualStudio.*, etc. Do we need to do that? |
This feature is only needed for binaries built by the repo. Binaries not built by the repo should already be signed or excluded. We can make sure that all binaries built by the repo that need a special cert have the attribute. Note that TargetFramework in FileSignInfo is optional. If not specified, the info applies to any dll of the specified name and PKT. |
We need to avoid signing already signed assemblies. Checking that an assembly is already signed can be done using the PEReader. That said, you are right we might need to exclude some unsigned assemblies from signing. To do so, we can use
|
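As a side note on the "already signed" check mentioned above, here is a minimal sketch, assuming the check is based on the PE headers that PEReader exposes; this is not the thread's actual implementation, and the file path is a placeholder.

```csharp
// Sketch: inspect PE/CLI headers to see whether a file already carries a signature.
using System;
using System.IO;
using System.Reflection.PortableExecutable;

static class SignedCheck
{
    static void Main()
    {
        string path = "Microsoft.Example.dll"; // placeholder

        using (FileStream stream = File.OpenRead(path))
        using (var peReader = new PEReader(stream))
        {
            // Authenticode signatures are recorded in the PE certificate table.
            bool hasAuthenticode = peReader.PEHeaders.PEHeader.CertificateTableDirectory.Size > 0;

            // Managed assemblies record strong-name signing in the CLI header flags
            // (this checks the flag only; it does not verify the signature).
            CorHeader corHeader = peReader.PEHeaders.CorHeader;
            bool strongNameSigned = corHeader != null &&
                                    (corHeader.Flags & CorFlags.StrongNameSigned) != 0;

            Console.WriteLine($"authenticode={hasAuthenticode}, strongName={strongNameSigned}");
        }
    }
}
```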
Thanks for the answer. That makes things much clearer. The only thing I wanted to confirm is how to check whether a random binary that I find in a package is from the current repo or not. That is, let's say a container file contains a file named "Foo.Bar.dll". How do I know if that file was produced by the current repo or not? My current idea is to check if there is a bin/Foo.Bar/{TFM}/Foo.Bar.dll file in the $(OutputDir) folder. Maybe we can even compare the contents of the files. Does this approach make sense? |
You should not need to know. Either it's signed or not. If it is signed then there is nothing we need to do for this file. Otherwise, check if there is a FileSignInfo for the file. If so, use the cert specified by this info (if none then skip the file). Otherwise, use the default certificate based on matching StrongNameSignInfo. Otherwise, report an error. |
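The decision flow described in the previous comment could be sketched roughly as follows; the FileSignInfo and StrongNameSignInfo lookups here are hypothetical stand-ins, not the real task API.

```csharp
// Sketch of the certificate-selection flow: already signed -> skip; explicit
// FileSignInfo -> use/skip; otherwise default cert by PKT; otherwise error.
using System;
using System.Collections.Generic;

static class CertificatePolicy
{
    // Returns the certificate to use, or null if the file should be skipped.
    public static string ChooseCertificate(
        string fileName,
        bool alreadySigned,
        string publicKeyToken,                                    // null for non-managed files
        IReadOnlyDictionary<string, string> fileSignInfo,         // file name -> certificate ("" means skip)
        IReadOnlyDictionary<string, string> strongNameSignInfo)   // public key token -> default certificate
    {
        // 1. Already signed: nothing to do for this file.
        if (alreadySigned)
            return null;

        // 2. Explicit FileSignInfo takes precedence; an empty certificate means "skip".
        if (fileSignInfo.TryGetValue(fileName, out string overrideCert))
            return string.IsNullOrEmpty(overrideCert) ? null : overrideCert;

        // 3. Fall back to the default certificate associated with the file's PKT.
        if (publicKeyToken != null && strongNameSignInfo.TryGetValue(publicKeyToken, out string defaultCert))
            return defaultCert;

        // 4. Nothing matched: report an error.
        throw new InvalidOperationException($"No signing information found for '{fileName}'.");
    }
}
```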
We shouldn't be touching files from the bin directory at all, or any other directory. All the assets we need are already included in the containers. |
It seems there are some files in our current SignToolData.json that aren't contained in any of our Arcade packages. What's the idea for handling that?
|
We don't want to sign or package test assemblies. |
@JohnTortugo is there an issue/PR tracking the consumption of your new changes in #410 in Arcade? |
There is issue #464 where I plan to track the work of making Arcade use the refactored SignTool. I'll create a PR for it today. |
As per a talk with @maririos, she said that we are going to patch the SignToolTask to receive a list of directories where the SignTool will look for container files (only .nupkg for now) that need to be signed.
Cc: @maririos @tmat @weshaggard