Container image with infra tools (terraform, terragrunt, aws cli, helm, kubectl...). Useful for CI/CD.
Table of Contents generated with mtoc
- Badges
- About
- Available tools
- Global gitconfig for internal git servers with self signed certificate
- Architecture
- Lint
- Image security scan with Trivy
- Running locally
- TODO
- CHANGELOG
- LICENSE
How many times have you needed a container image with tools like terraform, helm, kubectl, aws cli, terragrunt, among many others? Aren't you tired of maintaining one image per repository, instead of having a single, general-purpose image that can be shared across repos?
Available tags: https://hub.docker.com/r/containerscrew/infratools/tags
Tool | Available |
---|---|
Terraform | ✅ |
Terragrunt | ✅ |
Kubectl | ✅ |
Helm | ✅ |
AWS CLI | ✅ |
tftools | ✅ |
tfenv | ✅ |
ohmyzsh | ✅ |
Take a look at all the installed tools inside the Dockerfile.
Alpine core packages: https://pkgs.alpinelinux.org/packages
AWS CLI v2 is installed directly from the official Alpine repository. If you need another version, visit this page.
For every new release, a new git tag is created. You can see the pinned tool versions inside the Dockerfile.
Tip
By default, a version of terraform is installed using `tfenv`. If your terraform/terragrunt repository contains a `.terraform-version` file, `tfenv` should detect the version and install it automatically.
Or change it yourself, for example, within a pipeline:

```shell
tfenv use 1.5.5
```
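Alternatively, a pipeline can pin the version by writing the `.terraform-version` file up front — a minimal sketch, where `1.5.5` is only an example version:

```shell
# Pin the Terraform version for this repository (1.5.5 is only an example);
# tfenv detects this file automatically on the next terraform/terragrunt run.
echo "1.5.5" > .terraform-version
cat .terraform-version
# To force installation/activation explicitly in a pipeline step:
#   tfenv install && tfenv use
```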
You can install python libraries using `pip3`, but you will see the following error:

```
× This environment is externally managed
╰─> The system-wide python installation should be maintained using the system
    package manager (apk) only.

    If the package in question is not packaged already (and hence installable
    via "apk add py3-somepackage"), please consider installing it inside a
    virtual environment, e.g.:

    python3 -m venv /path/to/venv
    . /path/to/venv/bin/activate
    pip install mypackage

    To exit the virtual environment, run: deactivate

    The virtual environment is not deleted, and can be re-entered by
    re-sourcing the activate file.

    To automatically manage virtual environments, consider using pipx (from
    the pipx package).

note: If you believe this is a mistake, please contact your Python
installation or OS distribution provider. You can override this, at the risk
of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
```
Install a library plus its dependencies:

```shell
pipx install boto3 --include-deps
```

Install a package:

```shell
pipx install your-package-name # search on PyPI
```
Or use a virtual environment:

```shell
python3 -m venv /path/to/venv
. /path/to/venv/bin/activate
pip3 install mypackage
```

Or bypass the protection, at the risk of breaking the system python:

```shell
pip3 install boto3 --break-system-packages
```
If you use a custom git repository with a self-signed certificate, just edit your `~/.gitconfig`:
```ini
[http "https://gitlab.server.internal"]
    ##################################
    # Self Signed Server Certificate #
    ##################################
    sslCAInfo = /path/to/your/certificate.crt
    #sslCAPath = /path/to/selfCA/
    sslVerify = true # or set to false if you trust
```
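The same settings can also be applied non-interactively, e.g. from a CI job, with `git config` one-liners (the server URL and certificate path below are examples, not real values):

```shell
# Trust a self-signed CA for one internal server only
# (example URL and path — replace with your own)
git config --global http."https://gitlab.server.internal".sslCAInfo /path/to/your/certificate.crt
git config --global http."https://gitlab.server.internal".sslVerify true
```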
Arch | Supported | Tested |
---|---|---|
amd64 | ✅ | ✅ |
arm64 | ✅ | ✅ |
Lint the Dockerfile with hadolint:

```shell
make hadolint
```
This image uses the Trivy GitHub Action for security scanning.
Take a look at the official Trivy repo.
```shell
make build-image
make trivy-scan # trivy image docker.io/containerscrew/infratools:test
```
```shell
make local-build
make local-run
# Or all in one
make local-build-run
```
Use another version (tag) if needed (edit the Makefile).
Example `run.sh`:
```shell
#!/bin/bash

CONTAINER_NAME="infratools"
CONTAINER_VERSION="v2.5.2"

# Start the container only if it is not already running
if [ "$(docker ps | grep -c "${CONTAINER_NAME}")" -eq 0 ]; then
    docker run -tid \
        --name "${CONTAINER_NAME}" \
        --rm \
        -h "${CONTAINER_NAME}" \
        -v "$(pwd)"/:/code \
        -v ~/.ssh:/home/infratools/.ssh \
        -v ~/.aws:/home/infratools/.aws \
        -v ~/.kube:/home/infratools/.kube \
        -w /code/ \
        -e AWS_DEFAULT_REGION=eu-west-1 \
        --dns 1.1.1.1 \
        docker.io/containerscrew/infratools:"${CONTAINER_VERSION}"
fi

# Attach an interactive shell
docker exec -ti "${CONTAINER_NAME}" zsh
```
Important
ZSH history is saved in the mounted /code repository to allow persistent command history.
So, if you don't want to push `.zsh_history` to git, add the file to `.gitignore`.
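For example, from the repository root:

```shell
# Keep the persistent zsh history out of version control
echo ".zsh_history" >> .gitignore
```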
- Add other dynamic version switchers for other tools (tgswitch, kubectl...)
- Separate pipeline for build release + build in other branch