Commits on Aug 31, 2016
methods/ftp: Cope with weird PASV responses
wu-ftpd sends the response without parens, whereas we expect
them.

I did not test the patch, but it should work. I added another
return true if Pos is still npos after the second find to make
sure we don't add npos to the string.

Thanks: Lukasz Stelmach for the initial patch
Closes: #420940
(cherry picked from commit 25a6941)
Use the ConditionACPower feature of systemd in the apt-daily service
.. instead of hardcoding the functionality in the apt.systemd.daily
script.

Also make the compatibility cron job provide the same functionality
for systems that do not use systemd.

Closes: #827930
(cherry picked from commit 51d659e)
close server if parsing of header field failed
Seen in #828011 if we fail to parse a header field like Last-Modified we
end up interpreting the data as response header for coming requests in
case we don't rotate to a new server in DNS rotation.

(cherry picked from commit cc0a4c8)
Fix buffer overflow in debListParser::VersionHash()
If a package file is formatted in a way that no space
follows a deprecated "<", we would reformat it to "<=" and
increase the length of the output by 1, which can break.

Under normal circumstances with "<=" this should not be an
issue.

Closes: #828812
(cherry picked from commit b6e9756)
cache: Bump minor version to 6
Needed for the previous change

(cherry picked from commit 33aa275)
don't do atomic overrides with failed files
We deploy atomic renames for some files, but these renames also happen
if something about the file failed which isn't really the point of the
exercise…

Closes: 828908
(cherry picked from commit fc5db01)
if reading of autobit state failed, let write fail
If we can't read the old file we can't just move forward as that would
potentially discard old data (especially other fields). We let
it fail only after we are done writing the new file so a user has the
chance to look into and merge the new data (which is otherwise
discarded).

(cherry picked from commit 5209318)
write auto-bits before calling dpkg & again after if needed
Writing first means that even in the event of a power-failure the
autobit is saved for future processing instead of "forgotten" so that
the package is treated as manually installed.

In some cases we have to re-run the writing after dpkg is done, though,
as dpkg can let packages disappear and in such cases apt will move
autobits around (or in that case non-autobits) which we need to store.

(cherry picked from commit 309f497)
factor out Pkg/DepIterator prettyprinters into own header
The old prettyprinters have only access to the struct they pretty print,
which isn't enough usually as we want to know for a package also a bit
of state information like which version is the candidate.

We therefore need to pull the DepCache into context and hence use a
temporary struct which is printed instead of the iterator itself.

(cherry picked from commit 8457332)
more explicit MarkRequired algorithm code
Piling everything in a single if statement always made my head wobble,
but it didn't even have a benefit, as the most common case of a package
which isn't installed passes all of the old if and lands in the
non-existent else-part of the inner if. So besides being a subjective
cleanup of what goes on, this implementation should also be a bit faster.

No change in behavior should be present.

Gbp-Dch: Ignore
(cherry picked from commit 769e9f3)
more explicit MarkRequired algorithm code (part 2)
As with the previous commit, this shouldn't change behavior at all, but
besides being more explicit and perhaps faster it's also considerably
shorter (granted, mostly by if0-block elimination).

Gbp-Dch: Ignore
(cherry picked from commit 5a3339d)
protect only the latest same-source providers from autoremove
Traditionally all providers of something are protected, as apt
can't know which of them is actually providing the functionality
for the user, ensuring that we don't propose the removal of used stuff;
but that is of course also keeping stuff around which could be removed.

That can cause the collection of multiple old providers until the
provided package is itself no longer needed (e.g. out-of-tree kernel
modules). We combat this by marking providers only from the newest
source package version so that old providers built by older versions of
the same source package can be garbage collected.

(cherry picked from commit a0ed43f)
reinstalling local deb file is no downgrade
If we have a (e.g. locally built) deb file installed and try to
install it again, apt complained about this being a downgrade, but it
wasn't, as it is the very same version… it was just confused into not
merging the versions together, which looks like a downgrade then.

The same-size assumption is usually good, but given that volatile files
are parsed last (even after the status file) the base assumption no
longer holds; it is easy to adapt without actually changing anything in
practice.

(cherry picked from commit e7edb2f)
do not treat same-version local debs as downgrade
As the volatile sources are parsed last they were sorted behind the
dpkg/status file and hence are treated as a downgrade, which isn't
really what you want to happen, as from a user POV it's an upgrade.

(cherry picked from commit cb9ac09)
indextargets: Check that cache could be built before using it
This caused a crash because the cache was a nullptr.

Closes: #829651
(cherry picked from commit 8823972)
Make the test case executable
Gbp-Dch: ignore
(cherry picked from commit 2a90aa7)
avoid 416 response teardown binding to null pointer
methods/http.cc:640:13: runtime error: reference binding to null pointer
of type 'struct FileFd'

This reference is never used in the cases it has a nullptr, so the
practical difference is non-existent, but it's a bug still.

Reported-By: gcc -fsanitize=undefined
(cherry picked from commit 4460551)
use the right key for compressor configuration dump
The generated dump output is incorrect insofar as it uses the name as
the key for this compressor, but they don't need to be equal, as is the
case if you force some of the inbuilt ones to be disabled as our testing
framework does at times.

This is hidden from the changelog as nobody will actually notice, while
describing it in a few words makes it sound like an important change…

Git-Dch: Ignore
(cherry picked from commit 52bafea)
don't change owner/perms/times through file:// symlinks
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code is
expecting only real files though, and happily changes owner,
modification times and permission on the file the symlink points to,
which tend to be files we have no business in touching in this way.
Permissions of symlinks shouldn't be changed, and changing the owner is
usually pointless too, but just to be sure we pick the easy way out:
use lchown and check for symlinks before chmod/utimes.

Reported-By: Mattia Rizzolo on IRC
(cherry picked from commit 3465138)
report all instead of first error up the acquire chain
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so reporting
just one could turn out to be confusing, e.g. if name resolution failed
in a SRV record list.

(cherry picked from commit b50dfa6)
keep trying with next if connection to a SRV host failed
Instead of only trying the first host we get via SRV, we try them all as
we are supposed to and if that isn't working we try to connect to the
host itself as if we hadn't seen any SRV records. This was already the
intent of the old code, but it failed to hide earlier problems for the
next call, which would unconditionally fail then resulting in an all
around failure to connect. With proper stacking we can also keep the
error messages of each call around (and in the order tried) so if the
entire connection fails we can report all the things we have tried while
we discard the entire stack if something works out in the end.

(cherry picked from commit 3af3ac2)
Andrew Patterson + julian-klode
Add kernels with "+" in the package name to APT::NeverAutoRemove
Escape "+" in kernel package names when generating APT::NeverAutoRemove
list so it is not treated as a regular expression meta-character.

[Changed by David Kalnischkies: let test actually test the change]

Closes: #830159
(cherry picked from commit 130176b)
Turkish program translation update
Closes: 832039
(cherry picked from commit a913e64)
call flush on the wrapped writebuffered FileFd
The flush call is a no-op in most FileFd implementations, so this isn't
as critical as it might sound, as the only non-trivial implementation is
in the buffered writer, which tends not to be used to buffer another
buffer…

(cherry picked from commit 8ca481e)
verify hash of input file in rred
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we had
expected it to be.

Note that this isn't really a security improvement as a) the file we
patch is trusted & b) if the input is incorrect, the result will hardly be
matching, so this is just for failing slightly earlier with a more
relevant error message (although, in terms of rred, it is ignored and a
complete download is attempted instead).

(cherry picked from commit 6e71ec6)
use proper warning for automatic pipeline disable
Also fixes message itself to mention the correct option name as noticed
in #832113.

(cherry picked from commit b9c2021)
rred: truncate result file before writing to it
If another file in the transaction fails and hence dooms the transaction
we can end in a situation in which a -patched file (= rred writes the
result of the patching to it) remains in the partial/ directory.

The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an empty
file as intended it will override the content previously in the file.
That has the same result if the new content happens to be longer than
the old content, but if it isn't, parts of the old content remain in the
file, which will pass verification as the new content written to it
matches the hashes. If the entire transaction passes, the file will be
moved to the lists/ directory where it might or might not trigger errors
depending on whether the old content which remained forms a valid file
together with the new content.

This has no real security implications as no untrusted data is involved:
The old content consists of a base file which passed verification and a
bunch of patches which all passed multiple verifications as well, so the
old content isn't controllable by an attacker and the new one isn't
either (as the new content alone passes verification). So the best an
attacker can do is letting the user run into the same issue as in the
report.

Closes: #831762
(cherry picked from commit 0e071df)
Commits on Oct 05, 2016
(error) va_list 'args' was opened but not closed by va_end()
Reported-By: cppcheck
Gbp-Dch: Ignore
(cherry picked from commit 196d590)
if the FileFd failed already following calls should fail, too
There is no point in trying to perform Write/Read on a FileFd which
already failed as they aren't going to work as expected, so we should
make sure that they fail early on and hard.

(cherry picked from commit 02c3807)
gpgv: Unlink the correct temp file in error case
Previously, when data could be created and sig not, we would unlink
sig, not data (and vice versa).

(cherry picked from commit d0d06f4)
pass --force-remove-essential to dpkg only if needed
APT (usually) knows which package is essential or not, so we can avoid
passing this force flag to dpkg unconditionally if the user hasn't
chosen a non-default essential handling, obscuring the information.

(cherry picked from commit d3930f8)
allow user@host (aka: no password) in URI parsing
If the URI had no password, the username was ignored.

(cherry picked from commit a1f3ac8)
fileutl: empty file support: Avoid fstat() on -1 fd and check result
When checking if a file is empty, we forgot to check that
fstat() actually worked.

(cherry picked from commit 15fe8e6)
drop incorrect const attribute from DirectoryExists
Since its existence in 2010 DirectoryExists was always marked with this
attribute, but for no real reason. Arguably a check for the existence of
the file is not modifying global state, so theoretically this shouldn't
be a problem. It is wrong from a logical point of view through as
between two calls the directory could be created so the promise we made
to the compiler that it could remove the second call would be wrong, so
API wise it is wrong.

It's a bit mysterious that this is only observeable on ppc64el and can be
fixed by reordering code ever so slightly, but in the end its more our
fault for adding this attribute than the compilers fault for doing
something silly based on the attribute.

LP: 1473674
(cherry picked from commit 9445fa6)
http(s): allow empty values for header fields
It seems completely pointless from a server POV to send empty header
fields, so most of them don't do it (simply proven by this limitation
existing since day one) – but it is technically allowed by the RFC as
the surrounding whitespaces are optional, and Github seems to like sending
"X-Geo-Block-List:\r\n" since recently (bug reports in other http
clients indicate July), at least sometimes, as the reporter claims to
have seen it on https only even though it can happen with both.

Closes: 834048
(cherry picked from commit 148c049)
don't try pipelining if server closes connections
If a server closes a connection after sending us a file, that tends to
mean that it's a type of server which always closes the connection – it is
therefore relatively pointless to try pipelining with it even if it
isn't a problem by itself: apt is just restarting the pipeline each
time after it got served one file and the connection is closed.

The problem starts if one or more proxies are between the server and apt
and they disagree about how the connection should be handled, as in the
bug reporter's case where the responses apt gets contain both Keep-Alive
and Proxy-Connection headers (which apt both ignores) indicating a
proxy is trying to keep a connection open, while the response also
contains "Connection: close" indicating the opposite, which apt
understands and respects as it is required to do.

We avoid stepping into this abyss by not performing pipelining anymore
if we got a response with the indication to close the connection, if the
response was otherwise a success – error messages are sent by some
servers via this method as their pages tend to be created dynamically
and hence their size isn't known a priori to them.

Closes: #832113
(cherry picked from commit 9714d52)
set the correct item FileSize in by-hash case
In af81ab9 we implement by-hash as a
special compression type, which breaks this filesize setting as the code
is looking for a foobar.by-hash file then. Handling this case slightly
differently gets us the intended value. Note that this has no direct
effect as this value will be set in other ways, too, and could only
affect progress reporting.

Gbp-Dch: Ignore
(cherry picked from commit 3084ef2)
Ignore SIGINT and SIGQUIT for Pre-Install hooks
Instead of erroring out when receiving a SIGINT, let the
child deal with it - we'll error out anyway if the child
exits with an error or due to the signal. Also ignore
SIGQUIT, as system() ignores it.

This basically fixes Bug #832593, but: we are running
the hooks via sh -c. Some shells exit with a signal
error even if the command they are executing catches
the signal and exits successfully. So far, this has
been noticed on dash, which unfortunately, is our
default shell.

Example:
$ cat trap.sh
trap 'echo int' INT; sleep 10; exit 0
$ if dash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
FAIL: 130
$ if mksh -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
$ if bash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0

(cherry picked from commit a6ae3d3)
install-progress: Call the real ::fork() in our fork() method
We basically called ourselves before, creating an endless loop.

Reported-By: clang
(cherry picked from commit d651c4c)
Accept --autoremove as alias for --auto-remove
I probably missed that when I did the usability work. But better
late than never.

(cherry picked from commit 75d238b)
apt-inst: debfile: Pass comp. Name to ExtractTar, not Binary
In the old days, apt-inst used to use binaries, but now it
uses the built-in support and matches using Name, and not a
Binary.

(cherry picked from commit 8a36289)
changelog: Respect Dir setting for local changelog getting
This fixes issues with chroots, but the goal here was to get
the test suite working on systems without dpkg.

(cherry picked from commit 2ed62ba)
don't loop on pinning pkgs from absolute debs by regex
An absolute filename for a *.deb file starts with a /. A package with
the name of the file is inserted in the cache, which is provided by the
"real" package for internal reasons. The pinning code detects a regex
based wildcard by having the regex start with /. That is no problem
as a / can not be included in a package name… except that our virtual
filename package can and does.

We fix this in two ways actually: First, a regex is only considered a
regex if it also ends with / (we don't support flags). That stops our
problem with the virtual filename packages already, but to be sure we
also do not enter the loop if matcher and package name are equal.

It has to be noted that the creation of pins for virtual packages like
the filename packages affected here is pointless, as only versions can
be pinned, but checking that a package is really purely virtual is too
costly compared to just creating an unused pin.

Closes: 835818
(cherry picked from commit e950b7e)
Fix segfault and out-of-bounds read in Binary fields
If a Binary field contains one or more spaces before a comma, the
code produced a segmentation fault, as it accidentally set a pointer
to 0 instead of the value of the pointer.

If the comma is at the beginning of the field, the code would
create a binStartNext that points one element before the start
of the string, which is undefined behavior.

We also need to check that we do not step outside the string during the
replacement of spaces before commas: a string of the form " ,"
would otherwise walk past the boundary of the Buffer:

	binStartNext = offset 1 ','
	binEnd = offset 0	' '
	isspace_ascii(*binEnd) = true => --binEnd
				      => binEnd = - 1

We get rid of the problem by only allowing spaces to be eliminated
if they are not the first character of the buffer:

        binStartNext = offset 1 ','
        binEnd = offset 0       ' '
        binEnd > buffer = false, isspace_ascii(*binEnd) = true
		 => exit loop
                => binEnd remains 0

(cherry picked from commit ce6cd75)
test/integration/test-srcrecord: Make executable
I actually tried to amend the previous commit, but apparently
I forgot to add the file mode change.

Gbp-Dch: ignore
(cherry picked from commit 832f95f)
zh_CN.po: update simplified chinese translation
[jak@debian.org: This merges the state of 1.3_rc3]
Merge translations from 1.3~rc3
This fixes a few fuzzy strings.
TagFile: Fix off-by-one errors in comment stripping
Adding 1 to the value of d->End - current makes restLength one byte
too long: a call to memchr(current, ..., restLength) thus has
undefined behavior.

Also, reading the value of current has undefined behavior if
current >= d->End, not only for current > d->End:

Consider a string of length 1, that is d->End = d->Current + 1.
We can only read at d->Current + 0, but d->Current + 1 is beyond
the end of the string.

This probably caused several inexplicable build failures on hurd-i386
in the past, and just now caused a build failure on Ubuntu's amd64
builder.

Reported-By: valgrind
(cherry picked from commit 923c592)
Base256ToNum: Fix uninitialized value
If the inner Base256ToNum() returned false, it did not set
Num to a new value, leaving it uninitialized, and thus
might have caused the function to exit despite a good result.

Also document why the Res = Num, if (Res != Num) magic is done.

Reported-By: valgrind
(cherry picked from commit cf7503d)
try not to call memcpy with length 0 in hash calculations
memcpy is marked as nonnull for its input, but ignores the input anyhow
if the declared length is zero. Our SHA2 implementations do this as
well, it was "just" MD5 and SHA1 missing, so we add the length check
here as well as along the callstack as it is really pointless to do all
these method calls for "nothing".

Reported-By: gcc -fsanitize=undefined
(cherry picked from commit 644478e)
abort connection on '.' target replies in SRV
Commit 3af3ac2 (released in 1.3~pre1)
implements proper fallback for SRV, but that actually works too well,
as the RFC defines that such an SRV record should indicate that the
server doesn't provide this service and apt should respect this.

The solution is hence to fail again as requested even if that isn't what
the user (and perhaps even the server admins) wanted. At least we will
print a message now explicitly mentioning SRV to point people in the
right direction.

Reported-In: https://bugs.kali.org/view.php?id=3525
Reported-By: Raphaël Hertzog
(cherry picked from commit 99fdd80)
VersionHash: Do not skip too long dependency lines
If the dependency line does not contain spaces in the repository
but does in the dpkg status file (because dpkg normalized the
dependency list), the dpkg line might be longer than the line
in the repository. If it now happens to be longer than 1024
characters, it would be skipped, causing the hashes to be
out of date.

Note that we have to bump the minor cache version again as
this changes the format slightly, and we might get mismatches
with an older src cache otherwise.

Fixes Debian/apt#23

(cherry picked from commit 708e2f1)
Do not read stderr from proxy autodetection scripts
This fixes a regression introduced in
  commit 8f858d5

  don't leak FD in AutoProxyDetect command return parsing

which accidentally made the proxy autodetection code also read
the scripts output on stderr, not only on stdout when it switched
the code from popen() to Popen().

Reported-By: Tim Small <tim@seoss.co.uk>
(cherry picked from commit 0ecceb5)
Commits on Nov 14, 2016
avoid changing the global LC_TIME for Release writing
Using C++ here avoids calling setlocale, which never really was
ideal but was needed to avoid locale-specific weekday/month names.
(cherry picked from commit e0b01a8)
accept only the expected UTC timezones in date parsing
HTTP/1.1 hardcodes GMT (RFC 7231 §7.1.1.1) and what is good enough for the
internet must be good enough for us™ as we reuse the implementation
internally to parse (most) dates we encounter in various places like the
Release files with their Date and Valid-Until header fields.

Implementing a fully timezone aware parser just feels too hard for no
effective benefit as it would take 5+ years (= until LTS's are out of
fashion) until a repository could use non-UTC dates and expect it to
work. Not counting non-apt implementations which might or might not
only want to encounter UTC here as well.

As a bonus, this eliminates the use of an instance of setlocale in
libapt.

Closes: 819697
(cherry picked from commit 9febc2b)
avoid std::get_time usage to sidestep libstdc++6 bug
As reported upstream in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71556
the implementation of std::get_time is currently not as accepting as
strptime is, especially in how hours should be formatted.

Just reverting 9febc2b would be
possible, but then we would reopen the problems fixed by it, so instead
I opted here for a rewrite of the parsing logic which makes this method
a lot longer, but at least it provides the same benefits as the rewrite
in std::get_time was intended to give us and decouples us from the fix
of the issue in the standard library implementation of GCC.

LP: 1593583
(cherry picked from commit 1d742e0)
imbue datetime parsing with C.UTF-8 locale
Rewritten in 9febc2b for c++ locales
usage and rewritten again in 1d742e0 to
avoid a currently present stdlibc++6 bug in the std::get_time
implementation. The later implementation uses still stringstreams for
parsing, but forgot to explicitly reset the locale to something sane
(for parsing english dates that is), so date and especially the parsing
of a number is depending on the locale. Turns out, the French (among
others) format their numbers with space as thousand separator so for
some reason the stdlibc++6 thinks its a good idea to interpret the
entire datetime string as a single number instead of realizing that in
"25 Jun …" the later parts can't reasonably be part of that number even
through there are spaces there…

Workaround is hence: LC_NUMERIC=C.UTF-8

Closes: 828011
(cherry picked from commit 3bdff17)
prevent C++ locale number formatting in text APIs (try 2)
Followup of b58e2c7.
Still a regression of sorts of 8b79c94.

Closes: 832044
(cherry picked from commit 7303e11)
prevent C++ locale number formatting in text APIs (try 3)
This time it is the formatting of floating-point numbers in progress
reporting, with a radix character potentially not being dot.

Followup of 7303e11. Regression of
b58e2c7 insofar as it exchanged
very affected code with slightly less affected code.

LP: 1611010
(cherry picked from commit 0919f1d)
imbue .diff/Index parsing with C.UTF-8 as well
In 3bdff17 we did it for the datetime
parsing, but we use the same style in the parsing for pdiff (where the
size of the file is in the middle of the three fields), so imbuing here
as well is a good idea.

(cherry picked from commit 1136a70)
Use C locale instead of C.UTF-8 for protocol strings
The C.UTF-8 locale is not portable, so we need to use C, otherwise
we crash on other systems. We can use std::locale::classic() for
that, which might also be a bit cheaper than using locale("C").

(cherry picked from commit 0fb16c3)
travis: Pull in c++ standard library and g++ 5 from wily
The one in trusty does not support std::put_time(), causing the
compile to fail. This commit is specific to the 1.2 branch, as
newer branches already pull this in automatically.

Gbp-Dch: ignore
Add shippable.yml for CI on Shippable
This uses the current Ubuntu 16.04 for testing, but it only runs
one run, presumably as root.

(adapted from commit bb315d0)
Revert "if the FileFd failed already following calls should fail, too"
This reverts commit 1b63558.

This commit caused a regression in the unit tests: The error was
propagated to Close(), where we expected it to return true.
Commits on Nov 23, 2016
apt-key: warn instead of fail on unreadable keyrings
apt-key has inconsistent behaviour if it can't read a keyring file:
commands like 'list' skipped silently over such keyrings while 'verify'
failed hard, resulting in apt reporting confusing gpg errors (#834973).

As a first step we teach apt-key to be more consistent here, skipping
over unreadable keyrings in all commands but issuing a warning in the
process, which is as usual for apt commands displayed at the end of the
run.

(cherry picked from commit 105503b)
(removed the buffering of warnings in aptwarnings.log, as we do not
 have a cleanup function where we can cat it)

LP: #1642386
test-releasefile-verification: installaptold: Clean up before run
This is needed to make it possible to use installaptold multiple
times in a test case.

(originally part of commit 46e00c9)
show apt-key warnings in apt update
In 105503b we got a warning implemented
for unreadable files which greatly improves the behavior of apt update
already, as everything will work as long as we don't need the keys
included in these files. The behavior if they are needed is still
strange though, as update will fail claiming missing keys and a manual
test (which the user will likely perform as root) will be successful.

Passing the new warning generated by apt-key through to apt is a bit
strange from an interface point of view, but basically duplicating the
warning code in multiple places doesn't feel right either. That means we
have no translation for the message though, as apt-key has no i18n yet.

It also means that if the user has a bunch of sources each of them will
generate a warning for each unreadable file which could result in quite
a few duplicated warnings, but "too many" is better than none.

Closes: 834973
(cherry picked from commit 29c5909)
Commits on Dec 08, 2016
SECURITY UPDATE: gpgv: Check for errors when splitting files (CVE-201…
…6-1252)

This fixes a security issue where signatures of the
InRelease files could be circumvented in a man-in-the-middle
attack, giving attackers the ability to serve any packages
they want to a system, in turn giving them root access.

It turns out that getline() may not only return EINVAL
as stated in the documentation - it might also fail
in case of an error when allocating memory.

This fix not only adds a check that reading worked
correctly, it also implicitly checks that all writes
worked by reporting any other error that occurred inside
the loop and was logged by apt.

Affected: >= 0.9.8
Reported-By: Jann Horn <jannh@google.com>
Thanks: Jann Horn, Google Project Zero for reporting the issue
LP: #1647467
(cherry picked from commit 51be550)
(cherry picked from commit 4ef9e08)
gpgv: Flush the files before checking for errors
This is a follow up to the previous issue where we did not check
if getline() returned -1 due to an end of file or due to an error
like memory allocation, treating both as end of file.

Here we ensure that we also handle buffered writes correctly by
flushing the files before checking for any errors in our error
stack.

Buffered writes themselves were introduced in 1.1.9, but the
function was never called with a buffered file from inside
apt until commit 46c4043
which was first released with apt 1.2.10. The function is
public, though, so fixing this is a good idea anyway.

Affected: >= 1.1.9
(cherry picked from commit 6212ee8)
(cherry picked from commit e115da4)
Commits on Jan 17, 2017
https: Quote path in URL before passing it to curl
Curl requires URLs to be urlencoded. We are, however, giving it
undecoded URLs. This causes it to go completely nuts if there is
a space in the URI, producing requests like:

    GET /a file HTTP/1.1

which the servers then interpret as a GET request for "/a" with
HTTP version "file" or some other nonsense.

This works around the issue by encoding the path component of
the URL. I'm not sure if we should encode other parts of the URL
as well, this one seems to do the trick for the actual issue at
hand.

A more correct fix is to avoid the dequoting and (re-)quoting
of URLs when a redirect occurs / a new request is sent. That's
been on the radar for probably a year or two now, but nobody has
bothered to implement it yet.

LP: #1651923
(cherry picked from commit 994515e)
(cherry picked from commit 438b1d7)
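The workaround above can be illustrated with a minimal bash sketch of percent-encoding a path component before handing a URL to curl; this is illustrative only, apt's real quoting lives in its C++ string utilities:

```shell
# Percent-encode a URL path so the HTTP request line stays valid.
urlencode_path() {
    local s="$1" out="" c i
    for ((i = 0; i < ${#s}; i++)); do
        c="${s:i:1}"
        case "$c" in
            [a-zA-Z0-9./_~-]) out+="$c" ;;               # unreserved: keep as-is
            *) printf -v out '%s%%%02X' "$out" "'$c" ;;  # everything else: %XX
        esac
    done
    printf '%s\n' "$out"
}
urlencode_path "/a file"   # -> /a%20file, so "GET /a%20file HTTP/1.1" is well-formed
```

With the space encoded, the server no longer sees `file` where the HTTP version belongs.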
test: use downloadfile instead of apthelper download-file
This prevents CI failures from happening in 1.3 and 1.2 and
might actually be more complete.

Gbp-Dch: ignore
(cherry picked from commit 803dabd)
(cherry picked from commit f55bd82)
Commits on Feb 22, 2017
don't install new deps of candidates for kept back pkgs
In effect this is an extension of the six-year-old commit
a8dfff9 which uses the autoremover to
remove packages from the solution again which no longer need to be
there. Commonly these are dependencies of packages we end up not
installing due to problem resolver decisions. Slightly less common is
the situation we deal with here: a package we wanted to upgrade
sports a new dependency, but ends up being held back.

The problem is that any version of an installed reverse dependency can
bring back a "garbage" package – we need to allow this as there is
nothing inherently wrong in having garbage packages installed or upgrading
them (which itself would pull in garbage dependencies), so just blindly
killing all new garbage packages would prevent the upgrade (and actually
generate errors). What we should be doing is looking only at the version
we will have on the system, disregarding all old/new reverse dependencies.

Reported-By: Stuart Prescott (themill) on IRC
(cherry picked from commit 9521717)
(cherry picked from commit 72ea044)
(modified for 1.2.y by adjusting sections in test case)
keep Release.gpg on untrusted to trusted IMS-Hit
A user relying on the deprecated behaviour of apt-get to accept a source
with an unknown pubkey to install a package containing the key expects
that the following 'apt-get update' causes the source to be considered
as trusted, but in case the source hadn't changed in the meantime this
wasn't happening: The source kept being untrusted until the Release file
was changed.

This only affects sources not using InRelease and only apt-get; the apt
binary downright refuses this course of action, but it is a common way
of adding external sources.

Closes: 838779
(cherry picked from commit 84eec20)
LP: #1657440
(cherry picked from commit 5605c98)
reset HOME, USER(NAME), TMPDIR & SHELL in DropPrivileges
We can't clean up the environment like e.g. sudo would do as you usually
want the environment to "leak" into these helpers, but some variables
like HOME should really not still have the value of the root user – it
could confuse the helpers (USER) and HOME isn't accessible anyhow.

Closes: 842877
(cherry picked from commit 34b491e)
(cherry picked from commit cc59190)
add TMP/TEMP/TEMPDIR to the TMPDIR DropPrivileges dance
apt tools do not really support these other variables, but tools apt
calls might, so let's play it safe and clean those up as needed.

Reported-By: Paul Wise (pabs) on IRC
(cherry picked from commit e2c8c82)
(cherry picked from commit 52067bd)
show output as documented for APT::Periodic::Verbose 2
The documentation of APT::Periodic::Verbose doesn't match the code;
specifically, level 2 should apply some things differently from level 1,
but does not because the code uses `-le 2` instead of `-lt 2` or `-le 1`.

Closes: 845599
(cherry picked from commit 2506878)
(cherry picked from commit c2ce13f)
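The off-by-one can be shown with a tiny hypothetical guard (the helper names here are made up, not the script's own):

```shell
# A guard meant to take the level-1 branch only below verbosity 2:
buggy_is_low() { [ "$1" -le 2 ]; }   # bug: also true for level 2
fixed_is_low() { [ "$1" -lt 2 ]; }   # fix: true only for levels 0 and 1

buggy_is_low 2 && echo "level 2 wrongly treated like level 1"
fixed_is_low 2 || echo "level 2 now handled on its own"
```

With `-le 2`, verbosity level 2 silently falls into the same branch as level 1, which is exactly the mismatch with the documentation.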
bash-completion: Only complete understood file paths for install
Previously, apt's bash completion was such that, given

    $ mkdir xyzzz
    $ touch xyzzy.deb xyzzx.two.deb

you'd get

    $ apt install xyzz<tab>
    xyzzx.two.deb  xyzzz/
    $ apt install /tmp/foo/xyzz<tab>
    xyzzx.two.deb  xyzzz/

This is inconsistent (xyzzx.two.deb is listed but not xyzzy.deb) and,
worse than that, it offered things that apt would not actually
recognise as candidates for install:

    $ sudo apt install xyzzx.two.deb
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package xyzzx.two.deb
    E: Couldn't find any package by glob 'xyzzx.two.deb'
    E: Couldn't find any package by regex 'xyzzx.two.deb'

With this small (trivial, really) change, apt's bash completion will
only offer things apt understands, and won't require an additional
period in the filename to offer it:

    $ apt install xyzz<tab>^C
    $ # (no completions!)
    $ apt install ./xyzz<tab>
    xyzzx.two.deb  xyzzy.deb      xyzzz/
    $ apt install /tmp/foo/xyzz
    xyzzx.two.deb  xyzzy.deb      xyzzz/

fixes #28

LP: #1645815
(cherry picked from commit 6761dae)
(cherry picked from commit 3c49cc2)
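The rule boils down to a path check; this sketch uses an illustrative helper, not the actual completion code:

```shell
# Only explicit paths get .deb file completion; plain words complete
# to package names only, matching what apt itself accepts.
offer_deb_completion() {
    case "$1" in
        /*|./*|../*) return 0 ;;  # explicit path: offer matching .deb files
        *)           return 1 ;;  # plain word: package names only
    esac
}
offer_deb_completion "xyzz"   || echo "xyzz: package names only"
offer_deb_completion "./xyzz" && echo "./xyzz: .deb files offered"
```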
Honour Acquire::ForceIPv4/6 in the https transport
(cherry picked from commit 49b91f6)
(cherry picked from commit 4034726)
let {dsc,tar,diff}-only implicitly enable download-only
That was already the case for tar-only and diff-only, but in a more
confusing way and without a message, while dsc-only "worked" before,
resulting in a dpkg-source error shortly after as tar/diff files aren't available…

(cherry picked from commit 58ebb30)
(cherry picked from commit ab951bc)
avoid producing invalid options if repo has no host
This can happen e.g. for file: repositories. There is no inherent
problem with setting such values internally, but it's bad style,
forbidden in the manpage and could be annoying in the future.

Gbp-Dch: Ignore
(cherry picked from commit 44ecb8c)
(cherry picked from commit fec19de)
use FindB instead of FindI for Debug::pkgAutoRemove
Again no practical difference, but for consistency a boolean option
should really be accessed via a boolean method rather than an int,
especially if you happen to try setting the option to "true" …

Gbp-Dch: Ignore
(cherry picked from commit c15ba85)
(cherry picked from commit c0dc264)
don't show update stats if cache generation is disabled
It is unlikely that anyone is actually running into this, but if we were
asked not to generate a cache and avoided it in the first step, we
shouldn't create one implicitly anyway by displaying the statistics.

(cherry picked from commit 33f982b)
(cherry picked from commit 1d017d0)
don't lock dpkg in 'apt-get clean'
We get the archives/lock for clean – that is enough to ensure that other
apt instances aren't interfering (or are being interfered with). We
don't need to block actions involving dpkg.

(cherry picked from commit 22acd32)
(cherry picked from commit e3f9554)
don't lock dpkg in update commands
The update command acquires a lock on lists/, but at the end it will
also require the dpkg lock while building the binary caches. That seems
rather pointless as we are only reading those files, not writing to
them. This can also cause problems if a package installation is
running and a background process (like cron) starts an update: if you
are "lucky" enough, the update process will grab the dpkg lock in between
apt calls, causing the installation process to fail.

(cherry picked from commit 0d90815)
(cherry picked from commit b234a61)
avoid validate/delete/load race in cache generation
Keeping the Fd of the cache file we have validated around to later load
it into the mmap ensures not only that we load the same file (which
wouldn't really be a problem in practice), but that this file also still
exists and wasn't deleted e.g. by an 'apt clean' call run in parallel.

(cherry picked from commit 06606f0)
(cherry picked from commit 2e5726e)
remove 'old' FAILED files in the next acquire call
If apt renames a file to .FAILED it leaves apt's namespace and is never
touched again – except since 1.1~exp4, in which "apt clean" will remove
those files. The usefulness of these files rapidly degrades if you don't
keep the update log itself (together with debug output in the best case)
though, and on 99% of all systems they will be kept around forever just
to collect dust over time and eat up space.

With this commit an update call will remove all FAILED files of previous
runs, so that the FAILED files you have on disk are always only the ones
related to the last apt run, stopping apt from hoarding files.

Closes: 846476
(cherry picked from commit 7ca8349)
(cherry picked from commit c854040)
stop rred from leaking debug messages on recovered errors
rred can fail for a plethora of reasons, but its failure is usually
recoverable (Ign lines), so it shouldn't leak unrequested debug messages
to an observing user.

Closes: #850759
(cherry picked from commit 2984d7a)
(cherry picked from commit d0a345d)
basehttp: Only read Content-Range on 416 and 206 responses
This fixes issues with sourceforge where the redirector includes
such a Content-Range in a 302 redirect. Since we do not really know
what file is meant in a redirect, let's just ignore it for all
responses other than 416 and 206.

Maybe we should also get rid of the other errors, and just ignore
the field in those cases as well?

LP: #1657567
(cherry picked from commit 4759a70)
(cherry picked from commit b5d0e1b)
travis: Run test suites for root and user in separate build jobs
This hopefully cuts down on the test time. Optimally, we'd just have
one build job and parallelize, but that requires a tty or something,
probably due to GNU parallel?

Gbp-Dch: ignore
(cherry picked from commit 9b7c71f)
(cherry picked from commit e12dbcb)
COPYING.GPL: Update to recent version (address, LGPL name)
Just copied over from common-licenses. Seems we missed doing that
earlier.

Gbp-Dch: ignore
(cherry picked from commit 84285d1)
(cherry picked from commit e87c862)
Only merge acquire items with the same meta key
Since the introduction of by-hash, two differently named
files might have the same real URL. In our case, the files
icons-64x64.tar.gz and icons-128x128.tar.gz of empty tarballs.

APT would try to merge them and end up with weird errors because
it completes the first download and enters the second stage for
decompressing and verifying. After that it would queue a new item
to copy the original file to the location, but that copy item would
be in the wrong stage, causing it to use the hashes for the
decompressed item.

Closes: #838441
(cherry picked from commit 7b78e8b)
(cherry picked from commit d2749c8)
Do not lowercase package names representing .dsc/.deb/... files
In the case of build-dep and other commands where a file can be
passed we must make sure not to normalize the path name as that
can have odd side effects, or well, cause the operation to do
nothing.

Test for build-dep-file is adjusted to perform the vcard check
once as "vcard" and once as "VCard", thus testing that this
solves the reported bug.

We inline the std::transform() and optimize it a bit to not
write anything in the common case (package names are defined
to be lowercase, the whole transformation is just for names
that should not exist...) to counter the performance hit of
the added find() call (it's about 0.15% more instructions
than with the existing transform, but we save about 0.67%
in writes...).

Closes: #854794
(cherry picked from commit 85ee403)
(cherry picked from commit 83e6e1a)
Don't use -1 fd and AT_SYMLINK_NOFOLLOW for faccessat()
-1 is not an allowed value for the file descriptor, the only
allowed non-file-descriptor value is AT_FDCWD. So use that
instead.

AT_SYMLINK_NOFOLLOW has a weird semantic: It checks whether
we have the specified access on the symbolic link. It also
is implemented only by glibc on Linux, so it's inherently
non-portable. We should just drop it.

Thanks: James Clarke for debugging these issues
Reported-by: James Clarke <jrtc27@jrtc27.com>
(cherry picked from commit 25f54c9)
(cherry picked from commit 2124249)
Commits on Apr 25, 2017
Ignore \.ucf-[a-z]+$ like we do for \.dpkg-[a-z]+$
This gets rid of warnings about .ucf-dist files

Reported-By: Axel Beckert (on IRC)
(cherry picked from commit 5094697)
(cherry picked from commit 0c42bab)
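The effect of the extended skip can be demonstrated with grep; the combined regex below is an illustration of the two patterns from the commit title, not apt's actual code:

```shell
# Conffile leftovers from both dpkg and ucf are now ignored the same way.
for f in sources.list sources.list.dpkg-old sources.list.ucf-dist; do
    if printf '%s\n' "$f" | grep -qE '\.(dpkg|ucf)-[a-z]+$'; then
        echo "skip $f"
    else
        echo "keep $f"
    fi
done
```

Only the real configuration file is kept; the `.dpkg-old` and `.ucf-dist` leftovers no longer trigger warnings.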
Fix and avoid quoting in CommandLine::AsString
In the intended use case, where this serves as a hack, there is no problem
with double/single quotes being present as we write it to a log file
only, but nowadays our calling of apt-key produces a temporary config
file containing this "setting" as well and suddenly quoting is
important as the config file syntax is allergic to it.

So the fix is to ignore all quoting whatsoever in the input and just
quote (with singles) the option values containing spaces. That gives us
the correct result 99% of the time, and the 1% where the quote is an
integral element of the option … doesn't exist – or has bigger problems
than a log file not containing the quote. Same goes for newlines in values.

LP: #1672710
(cherry picked from commit 2ce15bd)
(cherry picked from commit c75620d)
systemd: Rework timing and add After=network-online
The delay values were so large that the timer could run at any
random time of the day, possibly interfering with business
hours and causing trouble. Reduce the random delay to 30 minutes
and the accuracy to the default value (1 minute).

Also drop the 18:00 event. People still actively use their devices
during that time, and on servers there might be less attendance
then than in the regular 06:00 time slot, so it would take longer to
fix things if something breaks.

During a boot, the service might be run to catch up with a timer
that would normally have elapsed. With no dependencies, it would
run before the network is online – that's bad. Adding an After=
and a Wants= fixes that for boots, but still leaves the same issue
for resume.

LP: #1615482
(cherry picked from commit b4f32b1)
(cherry picked from commit 6267b47)
apt-ftparchive: Support '.ddeb' dbgsym packages
(cherry picked from commit c832379)
(cherry picked from commit 3e26e4e)
Commits on May 05, 2017
Allow the daily script to be run in two phases
This adds an argument to the script which may be update, install,
or empty. In the update case, downloads are performed. In the
install case, installs are performed. If empty, both are run.

Gbp-Dch: ignore
(cherry picked from commit 007b22e)
(cherry picked from commit d02da9d)
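The dispatch described above can be sketched as follows; the function names are illustrative, not the script's actual ones:

```shell
# Run the download phase, the install phase, or both, depending on $1.
do_download() { echo "download"; }
do_install()  { echo "install"; }

case "${1:-}" in
    update)  do_download ;;
    install) do_install ;;
    "")      do_download; do_install ;;
    *)       echo "unknown argument: $1" >&2; exit 1 ;;
esac
```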
Run unattended-upgrade -d in download part
We want to download the upgrades first, if unattended-upgrades
is configured. We don't want to use the normal dist-upgrade -d
thing for it, though, as unattended-upgrades only upgrades a
subset.

(cherry picked from commit 01e324a)
(cherry picked from commit f1f796a)
apt.systemd.daily: Add locking
Use a lock file to make sure only one instance of the
script is running at the same time.

(cherry picked from commit ea49b66)
(cherry picked from commit 820b469)
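A minimal sketch of single-instance locking with flock(1); the lock file path here is made up, the real script uses its own location:

```shell
# Hold an exclusive lock on fd 3 for the lifetime of the script;
# a second instance fails the non-blocking flock and exits.
LOCKFILE="${TMPDIR:-/tmp}/apt-daily-demo.lock"
exec 3>"$LOCKFILE"
if ! flock -n 3; then
    echo "another instance is already running" >&2
    exit 0
fi
echo "lock acquired"
# ... daily work happens here while fd 3 holds the lock ...
```

Because the lock is tied to the open file descriptor, it is released automatically when the script exits, even on a crash.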
Split apt-daily timer into two
The timer doing downloading runs throughout the day, whereas
automatic upgrade and clean actions only happen in the morning.

The upgrade service and timer have After= ordering requirements
on their non-upgrade counterparts to ensure that upgrading at
boot takes place after downloading.

LP: #1686470
(cherry picked from commit 496313f)
(cherry picked from commit a234cfe)
bash-completion: Fix spelling of autoclean
Closes: #861846
(cherry picked from commit 6ff527b)
(cherry picked from commit 732325f)
Commits on May 19, 2017
apt.systemd.daily: fix error from locking code
Error:

    pkgs that look like they should be upgraded:
    Error in function stop
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/apt/progress/text.py", line 240,
    in stop
        apt_pkg.size_to_str(self.current_cps))).rstrip("\n"))
      File "/usr/lib/python3/dist-packages/apt/progress/text.py", line 51,
    in _write
        self._file.write("\r")
    AttributeError: 'NoneType' object has no attribute 'write'
    fetch.run() result: 0

Caused by:

    LOCKFD=3
    unattended_upgrades $LOCKFD>&-

Unfortunately this code does not work; it is equivalent to

    unattended_upgrades 3 >&-

I.e. it left fd 3 open, but closed stdout!

Closes: #862567
(cherry picked from commit 7b4581c)
(cherry picked from commit 3310f86)
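The expansion bug above can be demonstrated directly: the shell expands the variable before parsing redirections, so `cmd $LOCKFD>&-` becomes `cmd 3 >&-`, closing stdout and passing "3" as an argument (the helper below is made up for the demo):

```shell
LOCKFD=3
OUT=$(mktemp)
# Write argument count and values to a file, since stdout may be closed.
show_args() { printf 'args:%d:%s\n' "$#" "$*" > "$OUT"; }

show_args $LOCKFD>&-
cat "$OUT"   # args:1:3  -> stdout was closed, "3" became an argument
show_args 3>&-
cat "$OUT"   # args:0:   -> fd 3 closed, no arguments passed
rm -f "$OUT"
```

Hardcoding the descriptor in the redirection, as the fix does, closes the intended fd.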
apt.systemd.daily: Drop the LOCKFD variable
Gbp-Dch: ignore
(cherry picked from commit 3819004)
(cherry picked from commit 7e65cbf)
Commits on Jun 19, 2017
Fix parsing of or groups in build-deps with ignored packages
If the last alternative(s) of an Or group are ignored because they do
not match an architecture list, we would end up keeping the Or flag,
effectively making the next AND an OR.

For example, when parsing (on amd64):

    debhelper (>= 9), libnacl-dev [amd64] | libnacl-dev [i386]
 => debhelper (>= 9), libnacl-dev |

Which can cause python-apt to crash.

Even worse:

     debhelper (>= 9), libnacl-dev [amd64] | libnacl-dev [i386], foobar
  => debhelper (>= 9), libnacl-dev [amd64] | foobar

By setting the previous alternative's Or flag to the current Or flag
when the current alternative is ignored, we solve the issue.

LP: #1694697
(cherry picked from commit 7ddf958)
(cherry picked from commit 423ba4a)
apt.systemd.daily: Use unattended-upgrade --download-only if available
Instead of passing -d, which enables a debugging mode, check whether
unattended-upgrade supports a --download-only option (which is yet
to be implemented) and use that.

Closes: #863859
Gbp-Dch: Full
(cherry picked from commits 31c81a3, cedf80c)

(cherry picked from commit 80b8089)
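Feature detection of this kind can be sketched as below; `unattended_upgrade` is stubbed so the example is self-contained, and the `--help`-grepping approach is an assumption, not necessarily how the real script probes for the option:

```shell
# Stub standing in for the installed unattended-upgrade binary.
unattended_upgrade() { echo "usage: unattended-upgrade [--download-only] [-d]"; }

# Prefer --download-only when advertised, fall back to -d otherwise.
if unattended_upgrade --help 2>/dev/null | grep -q -- --download-only; then
    UA_FLAG=--download-only   # downloads without debug output
else
    UA_FLAG=-d                # older versions: debug mode as a stand-in
fi
echo "using $UA_FLAG"
```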
Commits on Sep 13, 2017
Robert Luberda + julian-klode
fix a "critical" typo in old changelog entry
This typo exposes a bug in apt-listchanges that prevents commands like
`apt-listchanges --show-all apt_*.deb' from showing the changelog.
The bug will be fixed in the next upload of apt-listchanges, but I think
it would be nice to have the typo fixed as well.

Closes: 866358
(cherry picked from commit ec0ebf7)
use port from SRV record instead of initial port
An SRV record includes a port number to use with the host given, but apt
was ignoring that port number and instead used either the port given by
the user for the initial host or the default port for the service.

In practice the service usually runs on another host on the default
port, so it tends to work as intended, and even if not and apt can't get
a connection there, it will gracefully fall back to contacting the initial
host with the right port, so it's a user-invisible bug most of the time.

(cherry picked from commit 9bdc090)
Reset failure reason when connection was successful
When APT was trying multiple addresses, any later error
somewhere else would be reported with ConnectionRefused
or ConnectionTimedOut as the FailReason because that
was set by early connect attempts. This causes APT to
handle the failures differently, leading to some weirdly
breaking test cases (like the changed one).

Add debugging to the previously failing test case so
we can find out when something goes wrong there again.

(cherry picked from commit d3a70c3)
http: A response with Content-Length: 0 has no content
APT considered any response with a Content-Length to have a
body, even if the value of the header was 0. A 0-length body,
however, is equal to no body.

(cherry picked from commit d47fb34)
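The corrected rule is a two-part check; the helper name below is made up for illustration:

```shell
# A body exists only when Content-Length is present *and* non-zero.
have_body() { [ -n "$1" ] && [ "$1" -gt 0 ]; }

have_body 0   && echo "body" || echo "no body"   # Content-Length: 0
have_body 125 && echo "body" || echo "no body"   # Content-Length: 125
```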
Gracefully terminate process when stopping apt-daily-upgrade
The main process is guessed by systemd. This prevents killing dpkg
run by unattended-upgrades in the middle of installing packages
and ensures graceful shutdown.

The timeout of 900 seconds after which apt-daily-upgrade.service
is killed is in sync with unattended-upgrades's timer.

LP: #1690980
(cherry picked from commit 78bc10d)
don't ask an uninit _system for supported archs
A libapt user who hasn't initialized _system likely has a reason, so we
shouldn't greet them back with a segfault, usually deep down in the call
stack, for no reason. If the user had intended to pick up information
from the system, _system wouldn't be uninitialized after all.

LP: #1613184
SRU: 1.4.y
(cherry picked from commit cba5c5a)
apt-daily: Pull in network-online.target in service, not timer
There's no real point in pulling it in in the timer already,
and it is somewhat safer to do so in the service.

(cherry picked from commit 11417c1)

LP: #1716973
(cherry picked from commit 3e63968)
Showing with 2,876 additions and 1,423 deletions.
  1. +6 −2 .travis.yml
  2. +21 −22 COPYING.GPL
  3. +1 −1 apt-inst/deb/debfile.cc
  4. +43 −25 apt-pkg/acquire-item.cc
  5. +14 −3 apt-pkg/acquire-method.cc
  6. +28 −3 apt-pkg/acquire.cc
  7. +10 −9 apt-pkg/algorithms.cc
  8. +1 −1 apt-pkg/aptconfiguration.cc
  9. +2 −2 apt-pkg/cacheiterators.h
  10. +10 −4 apt-pkg/contrib/cmndline.cc
  11. +16 −18 apt-pkg/contrib/error.cc
  12. +58 −20 apt-pkg/contrib/fileutl.cc
  13. +4 −1 apt-pkg/contrib/fileutl.h
  14. +32 −5 apt-pkg/contrib/gpgv.cc
  15. +2 −0 apt-pkg/contrib/hashes.cc
  16. +4 −4 apt-pkg/contrib/hashes.h
  17. +6 −6 apt-pkg/contrib/hashsum_template.h
  18. +2 −0 apt-pkg/contrib/md5.cc
  19. +1 −1 apt-pkg/contrib/md5.h
  20. +1 −1 apt-pkg/contrib/proxy.cc
  21. +2 −0 apt-pkg/contrib/sha1.cc
  22. +1 −1 apt-pkg/contrib/sha1.h
  23. +3 −3 apt-pkg/contrib/sha2.h
  24. +111 −39 apt-pkg/contrib/strutl.cc
  25. +15 −0 apt-pkg/contrib/strutl.h
  26. +22 −7 apt-pkg/deb/deblistparser.cc
  27. +5 −2 apt-pkg/deb/debmetaindex.cc
  28. +17 −4 apt-pkg/deb/debsrcrecords.cc
  29. +28 −6 apt-pkg/deb/dpkgpm.cc
  30. +136 −136 apt-pkg/depcache.cc
  31. +1 −0 apt-pkg/depcache.h
  32. +1 −1 apt-pkg/edsp.cc
  33. +1 −0 apt-pkg/init.cc
  34. +39 −34 apt-pkg/install-progress.cc
  35. +1 −1 apt-pkg/install-progress.h
  36. +19 −18 apt-pkg/packagemanager.cc
  37. +5 −1 apt-pkg/pkgcache.cc
  38. +46 −30 apt-pkg/pkgcachegen.cc
  39. +4 −5 apt-pkg/policy.cc
  40. +56 −0 apt-pkg/prettyprinters.cc
  41. +37 −0 apt-pkg/prettyprinters.h
  42. +2 −2 apt-pkg/tagfile.cc
  43. +2 −2 apt-private/acqprogress.cc
  44. +1 −0 apt-private/private-cmndline.cc
  45. +1 −1 apt-private/private-download.cc
  46. +24 −8 apt-private/private-install.cc
  47. +3 −7 apt-private/private-source.cc
  48. +7 −7 apt-private/private-update.cc
  49. +10 −2 cmdline/apt-config.cc
  50. +3 −3 cmdline/apt-get.cc
  51. +39 −26 cmdline/apt-key.in
  52. +4 −2 completions/bash/apt
  53. +1 −1 configure.ac
  54. +11 −0 debian/NEWS
  55. +11 −0 debian/apt-daily-upgrade.service
  56. +11 −0 debian/apt-daily-upgrade.timer
  57. +5 −2 debian/apt-daily.service
  58. +1 −2 debian/apt-daily.timer
  59. +23 −1 debian/apt.apt-compat.cron.daily
  60. +1 −1 debian/apt.auto-removal.sh
  61. +102 −95 debian/apt.systemd.daily
  62. +201 −1 debian/changelog
  63. +2 −2 debian/rules
  64. +1 −1 debian/tests/run-tests
  65. +2 −2 doc/apt-get.8.xml
  66. +1 −1 doc/apt-verbatim.ent
  67. +2 −2 doc/po/apt-doc.pot
  68. +1 −1 doc/po/de.po
  69. +1 −1 doc/po/es.po
  70. +1 −1 doc/po/fr.po
  71. +1 −1 doc/po/it.po
  72. +1 −1 doc/po/ja.po
  73. +1 −1 doc/po/nl.po
  74. +1 −1 doc/po/pl.po
  75. +1 −1 doc/po/pt.po
  76. +1 −1 doc/po/pt_BR.po
  77. +13 −18 ftparchive/writer.cc
  78. +31 −0 methods/aptmethod.h
  79. +37 −9 methods/connect.cc
  80. +2 −7 methods/copy.cc
  81. +15 −2 methods/ftp.cc
  82. +3 −0 methods/gpgv.cc
  83. +10 −8 methods/http.cc
  84. +1 −1 methods/http.h
  85. +16 −1 methods/https.cc
  86. +1 −1 methods/https.h
  87. +52 −23 methods/rred.cc
  88. +36 −26 methods/server.cc
  89. +1 −1 methods/server.h
  90. +2 −14 methods/store.cc
  91. +10 −9 po/apt-all.pot
  92. +13 −12 po/ar.po
  93. +13 −12 po/ast.po
  94. +13 −12 po/bg.po
  95. +9 −8 po/bs.po
  96. +13 −12 po/ca.po
  97. +11 −10 po/cs.po
  98. +13 −12 po/cy.po
  99. +13 −12 po/da.po
  100. +13 −12 po/de.po
  101. +13 −12 po/dz.po
  102. +13 −12 po/el.po
  103. +11 −10 po/es.po
  104. +13 −12 po/eu.po
  105. +13 −12 po/fi.po
  106. +13 −12 po/fr.po
  107. +13 −12 po/gl.po
  108. +11 −10 po/hu.po
  109. +13 −12 po/it.po
  110. +12 −11 po/ja.po
  111. +13 −12 po/km.po
  112. +13 −12 po/ko.po
  113. +11 −10 po/ku.po
  114. +11 −10 po/lt.po
  115. +13 −12 po/mr.po
  116. +14 −13 po/nb.po
  117. +13 −12 po/ne.po
  118. +11 −10 po/nl.po
  119. +13 −12 po/nn.po
  120. +13 −12 po/pl.po
  121. +13 −12 po/pt.po
  122. +13 −12 po/pt_BR.po
  123. +13 −12 po/ro.po
  124. +13 −12 po/ru.po
  125. +13 −12 po/sk.po
  126. +14 −13 po/sl.po
  127. +13 −12 po/sv.po
  128. +13 −12 po/th.po
  129. +13 −12 po/tl.po
  130. +110 −77 po/tr.po
  131. +13 −12 po/uk.po
  132. +13 −12 po/vi.po
  133. +19 −18 po/zh_CN.po
  134. +12 −11 po/zh_TW.po
  135. +13 −0 shippable.yml
  136. +0 −68 test/integration/Packages-bug-604222-new-and-autoremove
  137. +17 −0 test/integration/Packages-github-23-too-long-dependency-line
  138. +12 −3 test/integration/framework
  139. +32 −0 test/integration/skip-apt-dropprivs
  140. +0 −10 test/integration/status-bug-604222-new-and-autoremove
  141. +12 −0 test/integration/status-github-23-too-long-dependency-line
  142. +1 −1 test/integration/test-apt-cdrom
  143. +1 −0 test/integration/test-apt-cli-update
  144. +36 −0 test/integration/test-apt-get-autoremove
  145. +12 −7 test/integration/test-apt-get-build-dep-file
  146. +3 −3 test/integration/test-apt-get-changelog
  147. +26 −8 test/integration/test-apt-get-install-deb
  148. +26 −0 test/integration/test-apt-key
  149. +4 −0 test/integration/test-apt-update-expected-size
  150. +5 −1 test/integration/test-apt-update-file
  151. +9 −2 test/integration/test-apt-update-ims
  152. +44 −20 test/integration/test-bug-604222-new-and-autoremove
  153. +1 −1 test/integration/test-bug-738785-switch-protocol
  154. +1 −0 test/integration/test-bug-778375-server-has-no-reason-phrase
  155. +27 −0 test/integration/test-bug-829651
  156. +46 −0 test/integration/test-bug-838779-untrusted-to-trusted-Release-hit
  157. +58 −0 test/integration/test-bug-lp1694697-build-dep-architecture-limited-alternative
  158. +3 −0 test/integration/test-essential-force-loopbreak
  159. +17 −0 test/integration/test-github-23-too-long-dependency-line
  160. +9 −3 test/integration/test-kernel-helper-autoremove
  161. +3 −0 test/integration/test-method-rred
  162. +15 −5 test/integration/test-pdiff-usage
  163. +15 −1 test/integration/test-releasefile-verification
  164. +2 −2 test/integration/test-sourceslist-trusted-options
  165. +35 −0 test/integration/test-srcrecord
  166. +19 −0 test/integration/test-ubuntu-bug-1651923-requote-https-uri
  167. +4 −0 test/integration/test-ubuntu-bug-761175-remove-purge
  168. +27 −0 test/interactive-helper/aptdropprivs.cc
  169. +3 −0 test/interactive-helper/aptwebserver.cc
  170. +6 −0 test/interactive-helper/makefile
  171. +9 −0 test/libapt/apt-proxy-script
  172. +9 −1 test/libapt/commandline_test.cc
  173. +35 −0 test/libapt/fileutl_test.cc
  174. +73 −0 test/libapt/strutil_test.cc
  175. +38 −0 test/libapt/uri_test.cc
@@ -1,17 +1,21 @@
language: cpp
sudo: required
dist: trusty
+env:
+ - TEST_SUITE=user
+ - TEST_SUITE=root
before_install:
- sudo add-apt-repository 'deb http://archive.ubuntu.com/ubuntu/ wily main universe' -y
- |
sudo sh -c '/bin/echo -e "Package: *\nPin: release n=wily\nPin-Priority: 1" > /etc/apt/preferences.d/wily'
- sudo apt-get update -qq
install:
+ - sudo apt-get -qq -y -t wily install libstdc++-5-dev g++
- sudo ./prepare-release travis-ci
- sudo apt-get -qq -y -t wily install gettext liblz4-dev python3-apt
- make
script:
- make test
- - ./test/integration/run-tests -q
+ - test "$TEST_SUITE" != "user" || ./test/integration/run-tests -q
- sudo adduser --force-badname --system --home /nonexistent --no-create-home --quiet _apt || true
- - sudo ./test/integration/run-tests -q
+ - test "$TEST_SUITE" != "root" || sudo ./test/integration/run-tests -q
@@ -1,12 +1,12 @@
- GNU GENERAL PUBLIC LICENSE
- Version 2, June 1991
+ GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
- Copyright (C) 1989, 1991 Free Software Foundation, Inc.
- 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
- Preamble
+ Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
@@ -15,7 +15,7 @@ software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
-the GNU Library General Public License instead.) You can apply it to
+the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
@@ -55,8 +55,8 @@ patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
-
- GNU GENERAL PUBLIC LICENSE
+
+ GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
@@ -110,7 +110,7 @@ above, provided that you also meet all of these conditions:
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
-
+
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
@@ -168,7 +168,7 @@ access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
-
+
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
@@ -225,7 +225,7 @@ impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
-
+
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
@@ -255,7 +255,7 @@ make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
- NO WARRANTY
+ NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
@@ -277,9 +277,9 @@ YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
- END OF TERMS AND CONDITIONS
-
- How to Apply These Terms to Your New Programs
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
@@ -291,7 +291,7 @@ convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
- Copyright (C) 19yy <name of author>
+ Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
@@ -303,17 +303,16 @@ the "copyright" line and a pointer to where the full notice is found.
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
- Gnomovision version 69, Copyright (C) 19yy name of author
+ Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
@@ -336,5 +335,5 @@ necessary. Here is a sample; alter the names:
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
-library. If this is what you want to do, use the GNU Library General
+library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
@@ -113,7 +113,7 @@ bool debDebFile::ExtractTarMember(pkgDirStream &Stream,const char *Name)
Member = AR.FindMember(std::string(Name).append(c->Extension).c_str());
if (Member == NULL)
continue;
- Compressor = c->Binary;
+ Compressor = c->Name;
break;
}
@@ -1666,10 +1666,16 @@ void pkgAcqMetaSig::Done(string const &Message, HashStringList const &Hashes,
}
else if(MetaIndex->CheckAuthDone(Message) == true)
{
- if (TransactionManager->IMSHit == false)
+ auto const Releasegpg = GetFinalFilename();
+ auto const Release = MetaIndex->GetFinalFilename();
+ // if this is an IMS-Hit on Release, ensure we also have the Release.gpg file stored
+ // (previously an unknown pubkey), but only if the Release file exists locally (in the
+ // unlikely event of InRelease removed from the mirror causing fallback, but still an IMS-Hit)
+ if (TransactionManager->IMSHit == false ||
+ (FileExists(Releasegpg) == false && FileExists(Release) == true))
{
- TransactionManager->TransactionStageCopy(this, DestFile, GetFinalFilename());
- TransactionManager->TransactionStageCopy(MetaIndex, MetaIndex->DestFile, MetaIndex->GetFinalFilename());
+ TransactionManager->TransactionStageCopy(this, DestFile, Releasegpg);
+ TransactionManager->TransactionStageCopy(MetaIndex, MetaIndex->DestFile, Release);
}
}
}
@@ -1804,6 +1810,18 @@ void pkgAcqDiffIndex::QueueOnIMSHit() const /*{{{*/
new pkgAcqIndexDiffs(Owner, TransactionManager, Target);
}
/*}}}*/
+static bool RemoveFileForBootstrapLinking(bool const Debug, std::string const &For, std::string const &Boot)/*{{{*/
+{
+ if (FileExists(Boot) && RemoveFile("Bootstrap-linking", Boot) == false)
+ {
+ if (Debug)
+ std::clog << "Bootstrap-linking for patching " << For
+ << " by removing stale " << Boot << " failed!" << std::endl;
+ return false;
+ }
+ return true;
+}
+ /*}}}*/
bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
{
// failing here is fine: our caller will take care of trying to
@@ -1824,6 +1842,7 @@ bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
HashStringList ServerHashes;
unsigned long long ServerSize = 0;
+ auto const &posix = std::locale::classic();
for (char const * const * type = HashString::SupportedHashes(); *type != NULL; ++type)
{
std::string tagname = *type;
@@ -1835,6 +1854,7 @@ bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
string hash;
unsigned long long size;
std::stringstream ss(tmp);
+ ss.imbue(posix);
ss >> hash >> size;
if (unlikely(hash.empty() == true))
continue;
@@ -1913,6 +1933,7 @@ bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
string hash, filename;
unsigned long long size;
std::stringstream ss(tmp);
+ ss.imbue(posix);
while (ss >> hash >> size >> filename)
{
@@ -1971,6 +1992,7 @@ bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
string hash, filename;
unsigned long long size;
std::stringstream ss(tmp);
+ ss.imbue(posix);
while (ss >> hash >> size >> filename)
{
@@ -2008,6 +2030,7 @@ bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
string hash, filename;
unsigned long long size;
std::stringstream ss(tmp);
+ ss.imbue(posix);
// FIXME: all of pdiff supports only .gz compressed patches
while (ss >> hash >> size >> filename)
@@ -2138,23 +2161,15 @@ bool pkgAcqDiffIndex::ParseDiffIndex(string const &IndexDiffFile) /*{{{*/
if (unlikely(Final.empty())) // because we wouldn't be called in such a case
return false;
std::string const PartialFile = GetPartialFileNameFromURI(Target.URI);
- if (FileExists(PartialFile) && RemoveFile("Bootstrap-linking", PartialFile) == false)
- {
- if (Debug)
- std::clog << "Bootstrap-linking for patching " << CurrentPackagesFile
- << " by removing stale " << PartialFile << " failed!" << std::endl;
+ std::string const PatchedFile = GetKeepCompressedFileName(PartialFile + "-patched", Target);
+ if (RemoveFileForBootstrapLinking(Debug, CurrentPackagesFile, PartialFile) == false ||
+ RemoveFileForBootstrapLinking(Debug, CurrentPackagesFile, PatchedFile) == false)
return false;
- }
for (auto const &ext : APT::Configuration::getCompressorExtensions())
{
- std::string const Partial = PartialFile + ext;
- if (FileExists(Partial) && RemoveFile("Bootstrap-linking", Partial) == false)
- {
- if (Debug)
- std::clog << "Bootstrap-linking for patching " << CurrentPackagesFile
- << " by removing stale " << Partial << " failed!" << std::endl;
+ if (RemoveFileForBootstrapLinking(Debug, CurrentPackagesFile, PartialFile + ext) == false ||
+ RemoveFileForBootstrapLinking(Debug, CurrentPackagesFile, PatchedFile + ext) == false)
return false;
- }
}
std::string const Ext = Final.substr(CurrentPackagesFile.length());
std::string const Partial = PartialFile + Ext;
@@ -2435,9 +2450,10 @@ std::string pkgAcqIndexDiffs::Custom600Headers() const /*{{{*/
if(State != StateApplyDiff)
return pkgAcqBaseIndex::Custom600Headers();
std::ostringstream patchhashes;
- HashStringList const ExpectedHashes = available_patches[0].patch_hashes;
- for (HashStringList::const_iterator hs = ExpectedHashes.begin(); hs != ExpectedHashes.end(); ++hs)
- patchhashes << "\nPatch-0-" << hs->HashType() << "-Hash: " << hs->HashValue();
+ for (auto && hs : available_patches[0].result_hashes)
+ patchhashes << "\nStart-" << hs.HashType() << "-Hash: " << hs.HashValue();
+ for (auto && hs : available_patches[0].patch_hashes)
+ patchhashes << "\nPatch-0-" << hs.HashType() << "-Hash: " << hs.HashValue();
patchhashes << pkgAcqBaseIndex::Custom600Headers();
return patchhashes.str();
}
@@ -2584,12 +2600,14 @@ std::string pkgAcqIndexMergeDiffs::Custom600Headers() const /*{{{*/
return pkgAcqBaseIndex::Custom600Headers();
std::ostringstream patchhashes;
unsigned int seen_patches = 0;
+ for (auto && hs : (*allPatches)[0]->patch.result_hashes)
+ patchhashes << "\nStart-" << hs.HashType() << "-Hash: " << hs.HashValue();
for (std::vector<pkgAcqIndexMergeDiffs *>::const_iterator I = allPatches->begin();
I != allPatches->end(); ++I)
{
HashStringList const ExpectedHashes = (*I)->patch.patch_hashes;
for (HashStringList::const_iterator hs = ExpectedHashes.begin(); hs != ExpectedHashes.end(); ++hs)
- patchhashes << "\nPatch-" << seen_patches << "-" << hs->HashType() << "-Hash: " << hs->HashValue();
+ patchhashes << "\nPatch-" << std::to_string(seen_patches) << "-" << hs->HashType() << "-Hash: " << hs->HashValue();
++seen_patches;
}
patchhashes << pkgAcqBaseIndex::Custom600Headers();
@@ -2637,10 +2655,6 @@ void pkgAcqIndex::Init(string const &URI, string const &URIDesc,
DestFile = GetPartialFileNameFromURI(URI);
NextCompressionExtension(CurrentCompressionExtension, CompressionExtensions, false);
- // store file size of the download to ensure the fetcher gives
- // accurate progress reporting
- FileSize = GetExpectedHashes().FileSize();
-
if (CurrentCompressionExtension == "uncompressed")
{
Desc.URI = URI;
@@ -2677,6 +2691,9 @@ void pkgAcqIndex::Init(string const &URI, string const &URIDesc,
DestFile = DestFile + '.' + CurrentCompressionExtension;
}
+ // store file size of the download to ensure the fetcher gives
+ // accurate progress reporting
+ FileSize = GetExpectedHashes().FileSize();
Desc.Description = URIDesc;
Desc.Owner = this;
@@ -3239,7 +3256,8 @@ std::string pkgAcqChangelog::URI(pkgCache::VerIterator const &Ver) /*{{{*/
pkgCache::PkgIterator const Pkg = Ver.ParentPkg();
if (Pkg->CurrentVer != 0 && Pkg.CurrentVer() == Ver)
{
- std::string const basename = std::string("/usr/share/doc/") + Pkg.Name() + "/changelog";
+ std::string const root = _config->FindDir("Dir");
+ std::string const basename = root + std::string("usr/share/doc/") + Pkg.Name() + "/changelog";
std::string const debianname = basename + ".Debian";
if (FileExists(debianname))
return "copy://" + debianname;
@@ -80,9 +80,20 @@ void pkgAcqMethod::Fail(bool Transient)
{
string Err = "Undetermined Error";
if (_error->empty() == false)
- _error->PopMessage(Err);
- _error->Discard();
- Fail(Err,Transient);
+ {
+ Err.clear();
+ while (_error->empty() == false)
+ {
+ std::string msg;
+ if (_error->PopMessage(msg))
+ {
+ if (Err.empty() == false)
+ Err.append("\n");
+ Err.append(msg);
+ }
+ }
+ }
+ Fail(Err, Transient);
}
/*}}}*/
// AcqMethod::Fail - A fetch has failed /*{{{*/