Add gpg self-signature verification support #257

Merged
merged 39 commits into in-toto:develop on Mar 22, 2019

Conversation

lukpueh (Member) commented Feb 28, 2019

I apologize for the size of this PR, but separating the commits into more (inter-dependent) PRs would only increase the PR handling effort and still leave at least one PR with a large diff. Let me know if there is anything I can do to facilitate code review. A guide to this PR is provided below.

Fixes issue #:

Description of the changes being introduced by the pull request:
This PR adds signature verification on gpg public key export for two types of self-signatures:

  • Self-certificates related to the primary key, which can be used to acquire additional certified information about the primary key, such as the key expiration date.
  • Subkey binding signatures, which bind each subkey to the primary key. So far we just assumed that a subkey attached to a primary key belongs to the primary key without verifying the binding signature. This PR discards all subkeys that aren't verifiably bound to the primary key.

The lion's share of this PR is provided by d3b6c44, which updates gpg public key parsing to verify the corresponding self-signatures.
Suggestions to re-organize the relevant functions, i.e. get_pubkey_bundle , parse_pubkey_bundle, _assign_certified_key_info and _get_verified_subkeys, are appreciated.

So far we only supported gpg binary type signatures (i.e. signatures over in-toto metadata) and restricted signature parsing accordingly. b7ce963 updates signature parsing to support the self-signature types described above.

For binary type signature creation and verification we only support SHA256. 713929b and 9388560 add support for SHA1 and SHA512 too (for self-signatures only!), which, based on real-world keyring inspections, seem to be widely used.

4181469 updates signature parsing to return additional information, which is needed for further signature handling (as part of this PR), and also to get additional information about the primary key, such as the expiration date (required for #245). Note that this PR only provides certificate verification support, while the information extraction should be handled in #245 (see inline # TODO: comment).

Further tangential clean-up/refactor commits include:

  • 26c1fbf to remove an obsolete else clause
  • ae4f5d0 to enhance some related error messages and fix comments and docstrings
  • 4d825be to slightly modify signature keyid handling

Caveats/TODOs

Please verify and check that the pull request fulfills the following requirements:

  • The code follows the Code Style Guidelines
  • Tests have been added for the bug fix or new feature
  • Docs have been added for the bug fix or new feature

Support verification of gpg signatures that use SHA1 and SHA512.
Together with SHA256, these seem to be the most widely used
hashing algorithms for signatures over keys, such as
self-signatures.
Use the following command to see the distribution of hash
algos on your local gpg keyring (2, 8 and 10 should stick out).

gpg --export | gpg --list-packets | grep ":signature packet:" -a2 |\
grep -o "digest algo [0-9]*" | sort | uniq -c

See RFC4880 9.4 for the list of all hash algorithm IDs.
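For quick reference, a sketch of that ID-to-algorithm mapping (taken from RFC4880 9.4; the module's own constants remain authoritative):

  # Hash algorithm IDs per RFC4880 9.4, as reported by the command above
  # ("digest algo N"). Reference sketch only.
  RFC4880_HASH_ALGORITHMS = {
      1: "MD5",
      2: "SHA1",      # common on older self-signatures
      3: "RIPEMD160",
      8: "SHA256",    # current gpg default
      9: "SHA384",
      10: "SHA512",
      11: "SHA224",
  }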

We continue to use SHA256 exclusively for metadata signing
and verification. This addition is only needed to support
self-signature verification.
Add optional supported_hash_algorithms argument to
parse_signature_packet to support parsing signature packets
with other hash algos. So far only SHA256 was supported, which
remains default.
Add optional supported_signature_types argument to
parse_signature_packet to support parsing signature packets
of other types, e.g. different types of self-signatures.
So far only SIGNATURE_TYPE_BINARY packets were supported, which
remains the default.
Update parse_signature_packet to add an optional opaque info
dictionary to the returned signature. Whether it is added or
not may be configured via the boolean "include_info" parameter. By
default it is not added.

Currently the info dictionary contains the signature_type and
hash_algorithm and an empty list that may be used to add
subpackets such as a key expiration date.
The corresponding subpacket parsing code is already there
(commented out).
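
As an illustrative sketch only (argument names follow the commit messages above, and the function is assumed to live in in_toto.gpg.common; the actual signature may differ), a caller verifying self-signatures might use it roughly like this:

  # Hypothetical usage sketch, not the actual in-toto API surface.
  from in_toto.gpg.common import parse_signature_packet
  from in_toto.gpg.constants import SHA1, SHA256, SHA512

  signature_packet_data = b"..."  # placeholder: raw signature packet bytes from a gpg export

  signature = parse_signature_packet(
      signature_packet_data,
      supported_signature_types={0x13, 0x18},  # positive cert, subkey binding (RFC4880 5.2.1)
      supported_hash_algorithms={SHA1, SHA256, SHA512},
      include_info=True)                       # attach the opaque "info" dict
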
Update parse_signature_packet to only return the optional
short_keyid if it was parsed from the signature packet to (better)
match in_toto.gpg.formats.SIGNATURE_SCHEMA.

Note that the returned signature still might fail the schema check,
due to an empty keyid, but it is the responsibility of the caller
to adjust accordingly.

This commit also updates comments and calling functions.
This commit modifies the gpg public key parsing function to parse
additional packets, including User ID and User Attribute packets,
as well as signature packets corresponding to any of the other
parsed packets.

Additionally, the commit adds a helper function to verify
self-certificates that may carry information about the primary key,
such as key expiration date.

It also adds a helper function to verify subkey binding signatures
and updates the public key export function to only return subkeys
that are verifiably bound to the primary key.
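
To summarize the structure described above, here is a simplified flow sketch (helper names are taken from the PR description; the real code in in_toto/gpg/common.py differs in detail):

  # Simplified sketch of the key export flow; not the actual implementation.
  def get_pubkey_bundle_sketch(gpg_export_data, keyid):
    # 1. Split the gpg export into primary key, User ID, User Attribute and
    #    subkey packets, plus the signature packets that belong to each.
    raw_bundle = parse_pubkey_bundle(gpg_export_data)

    # 2. Verify self-certificates and attach certified information about the
    #    primary key (e.g. the key expiration date).
    master_public_key = _assign_certified_key_info(raw_bundle)

    # 3. Verify subkey binding signatures and keep only subkeys that are
    #    verifiably bound to the primary key.
    subkeys = _get_verified_subkeys(raw_bundle)
    if subkeys:
      master_public_key["subkeys"] = subkeys

    return master_public_key
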
raise ValueError("Hash algorithm '{}' not supported, must be one of '{}' "
"(see RFC4880 9.4. Hash Algorithms).".format(hash_algorithm_id,
{in_toto.gpg.constants.SHA1, in_toto.gpg.constants.SHA256,
in_toto.gpg.constants.SHA512}))


If this is never extended to support other hash types, my comment is moot, but there's a perhaps more concise way that avoids the if cascade:

  supported_hashing_algorithms = [in_toto.gpg.constants.SHA1,
      in_toto.gpg.constants.SHA256, in_toto.gpg.constants.SHA512]

  corresponding_hashing_classes = [hashing.SHA1, hashing.SHA256, hashing.SHA512]

  hashing_class = dict(zip(supported_hashing_algorithms, corresponding_hashing_classes))

  try:
    return hashing_class[hash_algorithm_id]
  except KeyError:
    raise ValueError("Hash algorithm '{}' not supported, must be one of '{}' "
        "(see RFC4880 9.4. Hash Algorithms).".format(hash_algorithm_id,
        supported_hashing_algorithms))

(And had 9388560 not dropped SUPPORTED_HASH_ALGORITHMS from gpg/constants.py, the proposed code could be even DRYer.)

lukpueh (Member Author)

I opted for the if/else variant because it seemed a tad easier to read, and I am not sure if we need to support other algos in the near future. But your approach is definitely more elegant.

Member

I was going to suggest something similar. I'm fine either way though.

elif packet_type == PACKET_TYPE_PRIMARY_KEY and \
key_bundle[PACKET_TYPE_PRIMARY_KEY]["key"]:
raise Exception("Unexpected primary key.")

@aaaaalbert aaaaalbert Feb 28, 2019

I always find if cascades with complex clauses difficult to read, as I'm never sure whether I'm missing a combination of conditions that isn't handled. (I do think however that L193-L251 make sense.)

The elif in L198-199 could be rolled into L205+ as a sub-if clause.

@lukpueh lukpueh (Member Author) Feb 28, 2019

Agreed, that's why I added comprehensive comments.

The elif in L198-199 could be rolled into L205+ as a sub-if clause.

That's how I wrote it initially, but then I thought it would be more readable the way it is now, as it is less arrow-head'ish.


arrow-head'ish

Hmm, like, two levels in doesn't feel that spiky to me :-)

lukpueh (Member Author)

It would be 4 levels already: while, try, elif, if ... 5 if you count the def... :P


...which is technically correct, but I think the def and while and try only count as overarching brackets. So: two ;-)

lukpueh (Member Author)

Agreed, it's only two (...technically). I'll still keep this one.

# - disallow duplicate packets
if packet_type != PACKET_TYPE_PRIMARY_KEY and \
not key_bundle[PACKET_TYPE_PRIMARY_KEY]["key"]:
raise Exception("First packet must be a primary key ('{}'), got '{}'."


Please raise more specific error types than plain Exceptions. (See also L200)

lukpueh (Member Author)

This one (and L200) are caught and re-raised with more information as specific PacketParsingErrors in L254, adding additional information about the packet position.


OK, got it. I wonder if it makes sense to raise the PacketParsingError on that level already, just so as to not catch other potential exceptions and report them in a confusing way.

(Then L254 would only except PacketParsingError and act upon it, no other unrelated exceptions.)

lukpueh (Member Author)

Makes sense!

PACKET_TYPE_USER_ATTR, PACKET_TYPE_SUB_KEY,
PACKET_TYPE_SIGNATURE]))

except Exception as e:
@aaaaalbert aaaaalbert Feb 28, 2019

There's a bunch of except Exceptions across this PR. I wonder why the blanket except is desired (instead of a more specific exception type), and why it makes sense in a few places to continue after logging the exception.

lukpueh (Member Author)

Agreed! I do have this concern somewhere in my notes and I meant to add it to the PR description's "caveats/todos" section, as this issue applies to other places in the gpg module too, and I plan on addressing it in a separate PR. Thanks for pointing it out!

lukpueh (Member Author)

Re continue: I considered that question too and decided that in some cases it is okay to continue the overall key export but ignore a self-certificate, if it

  • belongs to a malformed User ID packet (L307),
  • cannot be parsed (L315),
  • is invalid (L337),

and, ignore a subkey, if it

  • cannot be parsed (L386, L389),
  • has a corresponding binding signature that cannot be parsed (L400),
  • has not exactly one binding signature (L420), or
  • the binding signature is invalid (L430).

But maybe it makes sense to abort, since these errors are a sign that something is not fully supported or is fishy.

lukpueh (Member Author)

Will be addressed in #126 (comment).


# If Bit 6 of 1st octet is set we parse a New Format Packet Length, and
# an Old Format Packet Length otherwise
if data[0] & 0x40: # pragma: no cover


Consider using binary representations for this and L132, 0b0100_0000 and 0b0011_1111 and such. (I think this is what they're here for :-)
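
For illustration (a sketch only; the constants and variable names in the module may differ), the suggested binary notation could read:

  # Sketch of the suggested binary notation (RFC4880 4.2): bit 6 of the
  # first octet distinguishes new-format from old-format packet headers.
  def is_new_format_packet(first_octet):
    return bool(first_octet & 0b01000000)

  def new_format_packet_tag(first_octet):
    return first_octet & 0b00111111    # packet tag in the lower six bits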

lukpueh (Member Author)

Great idea, will do.

lukpueh (Member Author) commented Feb 28, 2019

Thanks for your comments @aaaaalbert, they are very much appreciated! ❤️❤️❤️

Do you have any advice regarding the organization of get_pubkey_bundle , parse_pubkey_bundle, _assign_certified_key_info and _get_verified_subkeys? I had this as one long spaghetti function initially, but that was too much to bear, so I split them into separate functions, but the boundaries feel a little arbitrary. Any thoughts?

@aaaaalbert

My pleasure! 😄


Re organizing these functions, I think it's very sensible to have them separated even if they are called from a single place only. Each is moderately complicated, each has side- and sub-conditions in its checks, there must be tons of docs in each.... I'm totally fine with the way you did this!

The only thing that comes to mind is the choice of where exactly you collate the return values of the private / sub-functions into what the caller uses. For example, parse_... combines a few values; meanwhile get_... could do the same right here with similar effort, assuming that it got parse_...'s key_bundle instead of the tuple it actually returns.

This isn't super-important, it's just to consider the time spent for someone who reads the code and has to trace function calls.

lukpueh (Member Author) commented Feb 28, 2019

@aaaaalbert, funny that you would mention this. Changing the code from the way you are proposing to what it is now was one of the last things I did on my WIP branch.

The (probably only semi-compelling) reason why I'm calling the helper functions from within parse_... is that I didn't want to expose the raw key bundle (i.e. primary key, user id, user attribute, and sub key packets plus their respective signature packets), in order not to confuse it with the key bundle (i.e. parsed primary key with certified additional info and parsed and bound subkeys) that's returned from get_... eventually.

But parse_... could of course be seen as just another helper function that returns a private data structure to be used by other helper functions.

@aaaaalbert

That's interesting! Any way is fine though, I suggest no change.

Fix unsubscriptable-object error on Python2.7 that occurs due to a
dict operation on a variable that gets initialized as None first
and then must get re-initialized as dict inside a loop.
@lukpueh lukpueh changed the title Add gpg self sigs Add gpg self-signature verification support Feb 28, 2019
one of SHA1, SHA256, SHA512 (see in_toto.gpg.constants) used to
verify the signature. Default is SHA256.
NOTE: The hash algorithms specified in "pubkey_info"'s
"hashes" or "method" field are currently not considered.
Member

I'm curious as to why we want to do this instead of updating our signature-type class?

lukpueh (Member Author)

What do you mean by "signature-type class"? The information that's on the pubkey, i.e. hashes and method? If yes, then the reason is that those are the hashes/method we support for binary signatures, i.e. signatures over documents, which is the main purpose of the keys, and which is still only sha256.

I added sha1 and sha512 for self-signatures only, which, IIRC, is something we agreed on in a different conversation. Does this make sense, or did I misunderstand your question?

@@ -187,12 +250,12 @@ def parse_subpackets(subpacket_octets):
length = subpacket_octets[ptr]
ptr += 1

if length > 192 and length < 255 : # pragma: no cover
if length >= 192 and length < 255 : # pragma: no cover
Member

Nice catch, thanks for correcting this.


Minimal style issue: No leading space for the : here.

lukpueh (Member Author)

@SantiagoTorres, it only got to my attention because I have a key in my system keyring, where that byte is exactly 192. :)
@aaaaalbert, thanks!
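
For context, the rule being fixed is the subpacket length encoding from RFC4880 5.2.3.1: a first octet of exactly 192 already introduces a two-octet length. A hedged sketch of that rule (variable names are illustrative, not the module's actual code):

  # Subpacket length decoding per RFC4880 5.2.3.1 (illustrative sketch).
  # Returns the decoded length and the pointer past the length field.
  def subpacket_length(octets, ptr):
    first = octets[ptr]
    if first < 192:
      return first, ptr + 1
    if 192 <= first < 255:    # two-octet length, including exactly 192
      return ((first - 192) << 8) + octets[ptr + 1] + 192, ptr + 2
    # first == 255: five-octet length, next four octets are a big-endian scalar
    return ((octets[ptr + 1] << 24) + (octets[ptr + 2] << 16) +
        (octets[ptr + 3] << 8) + octets[ptr + 4]), ptr + 5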


PACKET_TYPE_SIGNATURE = 0x02
PACKET_TYPE_PRIMARY_KEY = 0x06
PACKET_TYPE_USER_ID = 0x0D
PACKET_TYPE_USER_ATTR = 0x11
Member

I'm assuming we need these two to concatenate them and verify the key-binding sig? If so, then ignore this. I just wanted to bring this up.

lukpueh (Member Author)

You mean User ID and User Attribute? Yes, well no, not key-binding sigs but rather self-certs (0x10 - 0x13). See _assign_certified_key_info() for more details.

lukpueh (Member Author) commented Mar 5, 2019

Thanks for your feedback, @SantiagoTorres and @aaaaalbert! I'll address your comments and add tests soon.

Replace if cascade with dict/zip niceness. Kudos to @aaaaalbert
and @SantiagoTorres.
Replace except generic Exception in parsing loop with actually
expected exceptions.
Use binary notation for bitmasks for better readability.
Revert adding optional hash algorithm argument in
gpg.functions.gpg_verify_signature. This function is meant to
be used for binary type signatures, for which we only accept
sha256.

For other signature types (self sigs), where we accept other
hash algos too, we use a lower level gpg_verify_signature
function.
Add test to trigger KeyNotFoundError
Also store the header length in raw gpg pubkey bundles for testing
purposes.
Add test that asserts the presence of packets expected to be returned
from `parse_pubkey_bundle` for a specific test key. See
gpg --homedir tests/gpg_keyrings/rsa/ --export 9EA70BD13D883381 | \
   gpg --list-packets
Move signed content construction code out of try/except block.
If that code fails, something must have gone wrong earlier and
we should abort.

This commit also adds TODO notes to revise exception taxonomy.
Use real gpg key data and modify it to trigger errors for testing.
Use real gpg key data and modify it to trigger errors for testing.
Use manually crafted signature data to trigger errors for testing.
This differs across test environments.
lukpueh (Member Author) commented Mar 21, 2019

36085ad to 9a6141f are related to testing, including some minor modifications in the actual implementation to make testing easier. Please review!

@SantiagoTorres SantiagoTorres (Member) left a comment

Whoa, this took me hours to review.

On the surface I could only find very minor stylistic stuff, nice job!

I'm approving this, and probably doing the style changes myself. I want to test backwards-compatibility with old metadata, as I'm sure our integrators would appreciate it.

@@ -26,7 +25,7 @@
import in_toto.gpg.util
import in_toto.gpg.exceptions
import in_toto.gpg.formats

import in_toto.gpg.constants
Member

I see that this removed one whitespace (we should have 2 here right?)

lukpueh (Member Author)

Are you talking about blank lines after imports?

Member

yes

lukpueh (Member Author)

See bac9726

@@ -26,6 +26,7 @@

import in_toto.gpg.exceptions
import in_toto.process
import in_toto.gpg.constants

Member

same whitespace issue.

lukpueh (Member Author)

See bac9726

# If we have a full keyid it is safe to remove the short keyid to save space
del signature["short_keyid"]
# It is okay now to remove the optional short keyid to save space
signature.pop("short_keyid", None)
Member

Curious: what's the rationale for this change?

lukpueh (Member Author)

del signature["short_keyid"] might raise a KeyError. Since short_keyid is optional, I'm using pop to be safe.
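
A minimal illustration of the difference:

  signature = {"keyid": "abcd1234"}     # no optional "short_keyid" present
  signature.pop("short_keyid", None)    # no-op, returns None
  del signature["short_keyid"]          # would raise KeyError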

Member

Ah right. Thanks for clarifying!

# and it must be ordered because each signature packet belongs to the
# most recently parsed packet of a type.
elif packet_type in [PACKET_TYPE_USER_ID, PACKET_TYPE_USER_ATTR,
PACKET_TYPE_SUB_KEY]:
Member

nit, can we use a set instead?

lukpueh (Member Author)

sure will change

lukpueh (Member Author)

Done a4a7e58

# The remaining relevant packets are signatures, required to bind subkeys
# to the primary key, or to gather additional information about the
# primary key, e.g. expiration date.
# A signatures corresponds to the most recently parsed packet of a type,
Member

s/signatures/signature/g

lukpueh (Member Author)

Done 77d7fb1


except Exception as e:
log.info(e)
continue
Member

I wonder if we want to remove this except block and just let it blow up instead.

lukpueh (Member Author)

I am doing this in a more recent commit.


except Exception as e:
log.info(e)
continue
Member

same here

lukpueh (Member Author)

Here we shouldn't, because parse_signature_packet might fail legitimately: we want to continue if it fails because we're giving it e.g. a revocation signature, which might happen. In a more recent commit I added a TODO to revise the exception taxonomy.

@@ -183,6 +183,8 @@ def gpg_verify_signature(signature_object, pubkey_info, content,
hash_algorithm_id:
one of SHA1, SHA256, SHA512 (see in_toto.gpg.constants) used to
verify the signature
NOTE: Overrides any hash algorithm specification in "pubkey_info"'s
"hashes" or "method" fields.
Member

I wonder if this note sounds more dramatic than it really is (as far as I understand, hash_algorithm_id is populated from pubkey_info's "hashes" or "method" fields when it is not passed directly?).

lukpueh (Member Author)

No. Currently SHA256 is passed explicitly in the interface function gpg.functions.gpg_verify_signature to the corresponding dsa or rsa verification function. While it aligns with the possible values for "hashes" or "method", assigned in gpg.common.parse_pubkey_payload, it is not populated from those fields.

But even if it were populated from those fields (IMO a good idea), we still want to be able to use a public key that we restrict to sha256 for binary signatures (i.e. key_info["hashes"] = "pgp+SHA2"), to verify self-signatures with other hash algos ... hence the support for an override and hence the comment. :)

We can of course re-think the fields in the public key dict and how we assign them. But I wouldn't do it as part of this PR.

@@ -329,14 +672,18 @@ def parse_signature_packet(data):
if subpacket_type == PARTIAL_KEYID_SUBPACKET:
short_keyid = binascii.hexlify(subpacket_data).decode("ascii")

# info["subpackets"].append((
# subpacket_type,
# binascii.hexlify(subpacket_data).decode("ascii")))
Member

This somehow sneaked through. Could you please remove it?

lukpueh (Member Author)

I left it for michael, but I can remove it.

lukpueh (Member Author)

I added a TODO comment 60555c7 and will remove or un-comment it with #245

@SantiagoTorres SantiagoTorres (Member)

LGTM.

Thanks @lukpueh !

@SantiagoTorres SantiagoTorres merged commit eadfce5 into in-toto:develop Mar 22, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Aug 8, 2019
Change parse_pubkey_bundle to return the parsed raw pubkey bundle
and let the caller, i.e. get_pubkey_bundle, call functions for
further processing, i.e. _assign_certified_key_info and
_get_verified_subkeys, to eventually return the processed
pubkey bundle, i.e. master_public_key (enriched with certified
key info and optional verified subkeys).

This follows suggestions made by @aaaaalbert in
in-toto/in-toto#257 (comment)
and also makes testing easier.
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Aug 8, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Aug 14, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Aug 14, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Sep 4, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Sep 4, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Sep 4, 2019
lukpueh added a commit to lukpueh/securesystemslib that referenced this pull request Sep 4, 2019
tanishqjasoria pushed a commit to tanishqjasoria/securesystemslib that referenced this pull request Jan 30, 2020
tanishqjasoria pushed a commit to tanishqjasoria/securesystemslib that referenced this pull request Jan 30, 2020