
Introduce packaging tests for Docker #46599

Merged (18 commits, Oct 5, 2019)

@@ -1,5 +1,5 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# vi:ft=ruby ts=2 sw=2 sts=2 et:

# This Vagrantfile exists to test packaging. Read more about its use in the
# vagrant section in TESTING.asciidoc.
@@ -183,6 +183,35 @@ def deb_common(config, name, extra: '')
install_command: 'apt-get install -y',
extra: extra_with_lintian
)
deb_docker(config)

dliappis (Contributor), Sep 12, 2019:

As discussed privately, there's not much value in running this on all distros (and not all of them support Docker, e.g. centos-6), so it would be better to be specific about where we call this.

atorok (Contributor), Sep 12, 2019:

I think it would be better to bake this into the image rather than install it on demand. The CI packaging images already have Docker (at least the ones that support it), so we would only need to add it to the Vagrant images.

pugnascotia (Author, Contributor), Sep 12, 2019:

I'll sync with Infra about this, and remove it from the Vagrantfile once the boxes include Docker.

end

def deb_docker(config)
config.vm.provision 'install Docker', run: 'always', type: 'shell', inline: <<-SHELL
# Install packages to allow apt to use a repository over HTTPS
apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
# Set up the stable Docker repository
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
# Install Docker. Unlike on Fedora and CentOS, this also starts the daemon.
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
# Add vagrant to the Docker group, so that it can run commands
usermod -aG docker vagrant
SHELL
end

def rpm_common(config, name)
@@ -193,6 +222,26 @@ def rpm_common(config, name)
update_tracking_file: '/var/cache/yum/last_update',
install_command: 'yum install -y'
)
rpm_docker(config)
This conversation was marked as resolved by pugnascotia

dliappis (Contributor), Sep 12, 2019:

Same comment as above, especially since centos-7/oel-6 won't support this.

end

def rpm_docker(config)
config.vm.provision 'install Docker', run: 'always', type: 'shell', inline: <<-SHELL
# Install prerequisites
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add repository
yum-config-manager -y --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
yum install -y docker-ce docker-ce-cli containerd.io
# Start Docker
systemctl start docker
# Add vagrant to the Docker group, so that it can run commands
usermod -aG docker vagrant
SHELL
end

def dnf_common(config, name)
@@ -209,6 +258,26 @@ def dnf_common(config, name)
install_command: 'dnf install -y',
install_command_retries: 5
)
dnf_docker(config)
end

def dnf_docker(config)
config.vm.provision 'install Docker', run: 'always', type: 'shell', inline: <<-SHELL
# Install prerequisites
dnf -y install dnf-plugins-core
# Add repository
dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
# Install Docker
dnf install -y docker-ce docker-ce-cli containerd.io
# Start Docker
systemctl start docker
# Add vagrant to the Docker group, so that it can run commands
usermod -aG docker vagrant
SHELL
end

def suse_common(config, name, extra: '')
@@ -172,7 +172,7 @@ private static Version getUpgradeVersion(Project project) {
String box = project.getName();

// setup jdks used by the distro tests, and by gradle executing

NamedDomainObjectContainer<Jdk> jdksContainer = JdkDownloadPlugin.getContainer(project);
String platform = box.contains("windows") ? "windows" : "linux";
Jdk systemJdk = createJdk(jdksContainer, "system", SYSTEM_JDK_VERSION, platform);
@@ -309,13 +309,13 @@ public String toString() {
}
});
}

private List<ElasticsearchDistribution> configureDistributions(Project project, Version upgradeVersion) {
NamedDomainObjectContainer<ElasticsearchDistribution> distributions = DistributionDownloadPlugin.getContainer(project);
List<ElasticsearchDistribution> currentDistros = new ArrayList<>();
List<ElasticsearchDistribution> upgradeDistros = new ArrayList<>();

for (Type type : Arrays.asList(Type.DEB, Type.RPM)) {
for (Type type : Arrays.asList(Type.DEB, Type.RPM, Type.DOCKER)) {
for (Flavor flavor : Flavor.values()) {
for (boolean bundledJdk : Arrays.asList(true, false)) {
addDistro(distributions, type, null, flavor, bundledJdk, VersionProperties.getElasticsearch(), currentDistros);
@@ -366,7 +366,8 @@ private static void addDistro(NamedDomainObjectContainer<ElasticsearchDistributi
if (type == Type.ARCHIVE) {
d.setPlatform(platform);
}
d.setBundledJdk(bundledJdk);
// We don't test Docker images with a non-bundled JDK
d.setBundledJdk(type == Type.DOCKER || bundledJdk);

rjernst (Member), Oct 4, 2019:

This should be handled outside of addDistro, otherwise we are adding the same distribution twice? I.e. around line 325, where this method is called.
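rjernst's suggestion could look roughly like the following. This is a hypothetical, self-contained simplification of the nested loops in configureDistributions; the Type enum and the distro-id strings here are stand-ins for the real build classes, not the actual plugin code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: filter out the DOCKER + bundledJdk=false combination
// at the call site, so addDistro is never asked to register a duplicate.
public class DistroLoopSketch {
    enum Type { DEB, RPM, DOCKER }

    static List<String> buildDistroIds() {
        List<String> ids = new ArrayList<>();
        for (Type type : Arrays.asList(Type.DEB, Type.RPM, Type.DOCKER)) {
            for (boolean bundledJdk : Arrays.asList(true, false)) {
                // We don't test Docker images with a non-bundled JDK, so
                // skip the combination instead of coercing it inside addDistro.
                if (type == Type.DOCKER && bundledJdk == false) {
                    continue;
                }
                ids.add(type + (bundledJdk ? "" : "-no-jdk"));
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        // Prints [DEB, DEB-no-jdk, RPM, RPM-no-jdk, DOCKER]:
        // DOCKER appears once, and there is no DOCKER-no-jdk entry.
        System.out.println(buildDistroIds());
    }
}
```

With the filtering done here, neither addDistro nor distroId needs a special case for Docker.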

d.setVersion(version);
});
container.add(distro);
@@ -378,10 +379,17 @@ private static boolean isWindows(Project project) {
}

private static String distroId(Type type, Platform platform, Flavor flavor, boolean bundledJdk) {
return flavor + "-" + (type == Type.ARCHIVE ? platform + "-" : "") + type + (bundledJdk ? "" : "-no-jdk");
// We don't test Docker images with a non-bundled JDK
return flavor + "-" + (type == Type.ARCHIVE ? platform + "-" : "") + type + (bundledJdk || type == Type.DOCKER ? "" : "-no-jdk");
}

private static String destructiveDistroTestTaskName(ElasticsearchDistribution distro) {
return "destructiveDistroTest." + distroId(distro.getType(), distro.getPlatform(), distro.getFlavor(), distro.getBundledJdk());
Type type = distro.getType();
return "destructiveDistroTest." + distroId(
type,
distro.getPlatform(),
distro.getFlavor(),
// We don't test Docker images with a non-bundled JDK
type == Type.DOCKER || distro.getBundledJdk());

rjernst (Member), Oct 4, 2019:

This shouldn't be necessary; see my previous comment about where we add the distros. We should never add a docker distro with bundledJdk=false.

}
}
@@ -93,8 +93,9 @@ void setupDistributions(Project project) {
// for the distribution as a file, just depend on the artifact directly
dependencies.add(distribution.configuration.getName(), dependencyNotation(project, distribution));

// no extraction allowed for rpm or deb
if (distribution.getType() != Type.RPM && distribution.getType() != Type.DEB) {
// no extraction allowed for rpm, deb or docker
Type distroType = distribution.getType();
if (distroType != Type.RPM && distroType != Type.DEB && distroType != Type.DOCKER) {
This conversation was marked as resolved by pugnascotia

atorok (Contributor), Sep 12, 2019:

Would it make more sense to model this in Type itself? Then this would become something like if (distroType.shouldExtract()).

pugnascotia (Author, Contributor), Sep 12, 2019:

I like that idea.
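The shouldExtract() idea from the comment above might be implemented as follows; this is a minimal sketch, assuming a boolean flag on the enum constants (the real Type enum in ElasticsearchDistribution only gains a DOCKER constant in this PR, so the flag and method are hypothetical):

```java
// Hypothetical sketch of moving the "can this distribution be extracted?"
// decision onto the Type enum, so call sites ask distroType.shouldExtract()
// instead of enumerating RPM/DEB/DOCKER everywhere.
public class TypeSketch {
    enum Type {
        INTEG_TEST_ZIP(true),
        ARCHIVE(true),
        RPM(false),
        DEB(false),
        DOCKER(false);

        private final boolean extractable;

        Type(boolean extractable) {
            this.extractable = extractable;
        }

        boolean shouldExtract() {
            return extractable;
        }
    }

    public static void main(String[] args) {
        for (Type t : Type.values()) {
            System.out.println(t + " extractable=" + t.shouldExtract());
        }
    }
}
```

Adding a new non-extractable type would then be a one-line change to the enum, with no call sites to update.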

// for the distribution extracted, add a root level task that does the extraction, and depend on that
// extracted configuration as an artifact consisting of the extracted distribution directory
dependencies.add(distribution.getExtracted().configuration.getName(),
@@ -214,7 +215,6 @@ private Object dependencyNotation(Project project, ElasticsearchDistribution dis
}

private static Dependency projectDependency(Project project, String projectPath, String projectConfig) {

if (project.findProject(projectPath) == null) {
throw new GradleException("no project [" + projectPath + "], project names: " + project.getRootProject().getAllprojects());
}
@@ -226,11 +226,20 @@ private static Dependency projectDependency(Project project, String projectPath,

private static String distributionProjectPath(ElasticsearchDistribution distribution) {
String projectPath = ":distribution";
if (distribution.getType() == Type.INTEG_TEST_ZIP) {
projectPath += ":archives:integ-test-zip";
} else {
projectPath += distribution.getType() == Type.ARCHIVE ? ":archives:" : ":packages:";
projectPath += distributionProjectName(distribution);
switch (distribution.getType()) {
case INTEG_TEST_ZIP:
projectPath += ":archives:integ-test-zip";
break;

case DOCKER:

atorok (Contributor), Sep 30, 2019:

@pugnascotia the download distribution plugin is meant to be used outside of our build; @rjernst is working on adding support for downloading some of the artifacts that are not already downloadable. I think in the case of Docker, this could be set up differently to make that work easier. For example, ensureImageIsLoaded from the test should be part of this plugin: since Docker is distributed through the registry, there will be nothing to do here for the released versions.

Alternatively @rjernst, it might be more straightforward and easier to maintain to split the download plugin in two: one that deals exclusively with versions that are an actual download, restricted to certain distribution types, and one that deals with unreleased versions, with the latter applied only as part of our build.

I don't necessarily see this as a reason not to merge this PR.

pugnascotia (Author, Contributor), Oct 1, 2019:

Yes, I feel that we should merge this and come back to some of these issues. At least we'll have something to build upon.

projectPath += ":docker:";
projectPath += distributionProjectName(distribution);
break;

default:
projectPath += distribution.getType() == Type.ARCHIVE ? ":archives:" : ":packages:";
projectPath += distributionProjectName(distribution);
break;
}
return projectPath;
}
@@ -243,9 +252,12 @@ private static String distributionProjectName(ElasticsearchDistribution distribu
if (distribution.getBundledJdk() == false) {
projectName += "no-jdk-";
}

if (distribution.getType() == Type.ARCHIVE) {
Platform platform = distribution.getPlatform();
projectName += platform.toString() + (platform == Platform.WINDOWS ? "-zip" : "-tar");
} else if (distribution.getType() == Type.DOCKER) {
projectName += "docker-export";
} else {
projectName += distribution.getType();
}
@@ -46,7 +46,8 @@ public String toString() {
INTEG_TEST_ZIP,
ARCHIVE,
RPM,
DEB;
DEB,
DOCKER;

@Override
public String toString() {
@@ -171,11 +172,16 @@ public String toString() {
}

public Extracted getExtracted() {
if (getType() == Type.RPM || getType() == Type.DEB) {
throw new UnsupportedOperationException("distribution type [" + getType() + "] for " +
"elasticsearch distribution [" + name + "] cannot be extracted");
switch (getType()) {
case DEB:
case DOCKER:
case RPM:
throw new UnsupportedOperationException("distribution type [" + getType() + "] for " +
"elasticsearch distribution [" + name + "] cannot be extracted");

default:
return extracted;
}
return extracted;
}

@Override
@@ -217,7 +223,7 @@ void finalizeValues() {
if (platform.isPresent() == false) {
platform.set(CURRENT_PLATFORM);
}
} else { // rpm or deb
} else { // rpm, deb or docker
if (platform.isPresent()) {
throw new IllegalArgumentException("platform not allowed for elasticsearch distribution ["
+ name + "] of type [" + getType() + "]");
@@ -170,6 +170,32 @@ void addBuildDockerImage(final boolean oss) {
BuildPlugin.requireDocker(buildDockerImageTask)
}

/**
* Exports the generated Docker image to disk, so that it can be easily
* reloaded, for example into a VM. Although this involves writing out
* the entire image, it's still quicker than rebuilding the main archive
* in the VM.
*/
void addExportDockerImage(final boolean oss) {
This conversation was marked as resolved by pugnascotia

atorok (Contributor), Sep 12, 2019:

Should this be called below? It seems like it's never called.

pugnascotia (Author, Contributor), Sep 12, 2019:

Ah, good catch, that was from an earlier iteration.

def exportTaskName = taskName("export", oss, "DockerImage")
def tarFile = "${buildDir}/elasticsearch${oss ? '-oss' : ''}_test.docker.tar"

task(exportTaskName, type: LoggedExec) {
executable 'docker'
args "save",
"-o",
tarFile,
"elasticsearch${oss ? '-oss' : ''}:test"
}

artifacts.add(oss ? 'archives' : 'default', file(tarFile)) {
type 'tar'
name "elasticsearch${oss ? '-oss' : ''}"
builtBy exportTaskName
}
}


for (final boolean oss : [false, true]) {
addCopyDockerContextTask(oss)
addBuildDockerImage(oss)
@@ -184,3 +210,33 @@ assemble.dependsOn "buildDockerImage"
if (tasks.findByName("composePull")) {
tasks.composePull.enabled = false
}

subprojects { Project subProject ->
if (subProject.name.contains('docker-export')) {
apply plugin: 'distribution'

def oss = subProject.name.startsWith('oss')

rjernst (Member), Oct 4, 2019:

Please use static types. It makes the code easier to reason about, e.g. String here.


def exportTaskName = taskName("export", oss, "DockerImage")
def buildTaskName = taskName("build", oss, "DockerImage")
def tarFile = "${parent.buildDir}/elasticsearch${oss ? '-oss' : ''}_test.${VersionProperties.elasticsearch}.docker.tar"

final Task exportDockerImageTask = task(exportTaskName, type: LoggedExec) {
executable 'docker'
args "save",
"-o",
tarFile,
"elasticsearch${oss ? '-oss' : ''}:test"
}

exportDockerImageTask.dependsOn(parent.tasks.getByName(buildTaskName))

artifacts.add('default', file(tarFile)) {
type 'tar'
name "elasticsearch${oss ? '-oss' : ''}"
builtBy exportTaskName
}

assemble.dependsOn exportTaskName
}
}
@@ -0,0 +1,2 @@
// This file is intentionally blank. All configuration of the
// export is done in the parent project.

atorok (Contributor), Sep 12, 2019:

Why do we need a project for the extraction? Could it just be a task on the parent?

pugnascotia (Author, Contributor), Sep 12, 2019:

It's to do with DistributionDownloadPlugin: as far as I can tell, it's written to create dependencies between ES distributions and the default config on a project. I'm new to Gradle, but it looks like it doesn't depend on a task because the plugin needs to be able to locate the built archives. I'd be very happy to be pointed at better ways of doing this. I had to spend a while deciphering what was going on between DistroTestPlugin and DistributionDownloadPlugin.

pugnascotia (Author, Contributor), Sep 12, 2019:

Also, the same pattern is used in :distribution:packages.

atorok (Contributor), Sep 12, 2019:

I see. It's a bit strange that we have projects for export only here, but I'll defer to @rjernst.

@@ -0,0 +1,2 @@
// This file is intentionally blank. All configuration of the
// export is done in the parent project.