Description of issue or feature request:
This is not a bug (although it could become one on a memory-limited client device), but a performance improvement:
After PR #1202 there is only one place where the updater loads the whole target file into memory. We should avoid doing that, as targets could be very large and memory could be limited.
Current behavior:
_check_hashes() does this:
digest_object = securesystemslib.hash.digest(algorithm)
digest_object.update(file_object.read())
computed_hash = digest_object.hexdigest()
Expected behavior:
something like this (handwaves):
digest_object = securesystemslib.hash.digest(algorithm)
while True:
    chunk = file_object.read(CHUNK_SIZE)
    if not chunk:
        break
    digest_object.update(chunk)
computed_hash = digest_object.hexdigest()
or even more simply just let SSLib handle this with its default chunk size:
digest_object = securesystemslib.hash.digest_fileobject(file_object, algorithm)
computed_hash = digest_object.hexdigest()
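For illustration, here is a self-contained sketch of the chunked approach using only the standard-library hashlib (standing in for securesystemslib); the CHUNK_SIZE value and the chunked_hexdigest helper name are assumptions for this example, not names from the updater code. It shows that reading in fixed-size chunks produces the same digest as reading the whole file at once, while only ever holding CHUNK_SIZE bytes in memory:

```python
import hashlib
import io

# Hypothetical chunk size for this sketch; securesystemslib has its own default.
CHUNK_SIZE = 4096

def chunked_hexdigest(file_object, algorithm="sha256"):
    # Feed the file to the digest in fixed-size chunks so that at most
    # CHUNK_SIZE bytes of the target are held in memory at any time.
    digest_object = hashlib.new(algorithm)
    while True:
        chunk = file_object.read(CHUNK_SIZE)
        if not chunk:
            break
        digest_object.update(chunk)
    return digest_object.hexdigest()

# The chunked digest matches hashing the entire content in one call.
data = b"x" * 10_000  # deliberately larger than CHUNK_SIZE
assert chunked_hexdigest(io.BytesIO(data)) == hashlib.sha256(data).hexdigest()
```

The equality check at the end is the key property: chunking changes only the peak memory use, never the resulting hash.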