Installation & Configuration
What is the location of Docker's logging driver setup?
/etc/docker/daemon.json
What is the property name for the logging driver in daemon.json?
log-driver
Specifying the logging driver through CLI
docker container run -d --name testjson --log-driver json-file httpd:latest
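To confirm which driver a container actually ended up with, its config can be inspected; a hedged example (requires a running Docker daemon, and reuses the container name from above):

```shell
# Check the effective logging driver of the testjson container started above;
# for --log-driver json-file the output should be "json-file".
docker container inspect --format '{{.HostConfig.LogConfig.Type}}' testjson
```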
Configure Splunk as the logging driver
{
"log-driver": "splunk",
"log-opts": {
"splunk-token": "",
"splunk-url": "",
...
}
}
Configure journald as the logging driver
{ "log-driver": "journald" }
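A safe way to rehearse this change is to write the file to a scratch path and validate the JSON before copying it into place; the /tmp path below is illustrative:

```shell
# Write the journald daemon.json to a scratch location and validate it.
cat > /tmp/daemon-example.json <<'EOF'
{ "log-driver": "journald" }
EOF
python3 -m json.tool /tmp/daemon-example.json

# On a real host you would then copy it into place and restart the daemon:
#   sudo cp /tmp/daemon-example.json /etc/docker/daemon.json
#   sudo systemctl restart docker
```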
Drain manager nodes to make them unavailable as worker nodes
docker node update --availability drain <node-name>
Demote a manager node to a worker
docker node demote <node-name>
Remove a worker node
docker node rm <node-name>
Location of Docker Swarm state and manager logs
/var/lib/docker/swarm/
Take a backup while the manager is running
Hot Backup
Restore from a backup
1. Remove the contents of the /var/lib/docker/swarm directory on the new swarm.
2. Restore the /var/lib/docker/swarm directory with the contents of the backup.
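The backup/restore cycle can be rehearsed with plain tar in a scratch directory; the /tmp paths below are hypothetical stand-ins for /var/lib/docker/swarm (the real operation runs as root on a manager node):

```shell
# Hypothetical rehearsal of the swarm state backup/restore cycle.
src=/tmp/swarm-demo/var/lib/docker/swarm
mkdir -p "$src"
echo '{"cluster": "demo"}' > "$src/state.json"

# Backup: archive the swarm directory (a hot backup if the manager stays up).
tar -C "$(dirname "$src")" -cf /tmp/swarm-backup.tar swarm

# Restore step 1: remove the directory contents on the new node ...
rm -rf "$src"
# ... step 2: unpack the backup in its place.
tar -C "$(dirname "$src")" -xf /tmp/swarm-backup.tar
```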
Namespaces
- Process ID
- Mount
- IPC
- User
Control Groups
Control groups provide resource limitation and reporting capability within the container space. They allow granular control over which resources are allocated to containers and when they are allotted.
- CPU
- Memory
- Network Bandwidth
- Disk
- Priority
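These controllers surface as `docker run` resource flags; a sketch (requires a Docker daemon; the container name and image are illustrative):

```shell
# Cap CPU, memory, and relative block-I/O weight via cgroup-backed flags.
docker container run -d --name capped \
  --cpus "0.5" \
  --memory 256m \
  --blkio-weight 300 \
  httpd:latest
```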
Two ways to restore UCP
- On a manager node of an existing swarm that does not have UCP installed. In this case, the UCP restore uses the existing swarm.
- On a Docker Engine that isn't participating in a swarm. In this case, a new swarm is created and UCP is restored on top.
Command to back up UCP
docker container run --log-driver none --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:2.2.5 backup --interactive > /tmp/backup.tar
Command to restore UCP
docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:2.2.5 restore < /tmp/backup.tar
To create a backup of DTR, you need to
- Backup image content
- Backup DTR metadata
Backup image content
sudo tar -cf {{ image_backup_file }} \
  $(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>))
Backup DTR metadata
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  docker/dtr:2.3.5 backup \
  --ucp-url <ucp-url> \
  --ucp-insecure-tls \
  --ucp-username <ucp-username> \
  --existing-replica-id <replica-id> > backup-metadata.tar
To restore DTR, you need to
- Stop any DTR containers that might be running
- Restore the images from a backup
- Restore DTR metadata from a backup
- Re-fetch the vulnerability database
Command to stop DTR containers
docker run -it --rm \
  docker/dtr:2.3.5 destroy \
  --ucp-insecure-tls
Command to restore images
sudo tar -xf backup-images.tar -C /var/lib/docker/volumes
Restore DTR metadata
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  docker/dtr:2.3.5 restore \
  --ucp-url <ucp-url> \
  --ucp-insecure-tls \
  --ucp-username <ucp-username> \
  --ucp-node <hostname> \
  --replica-id <replica-id> \
  --dtr-external-url <dtr-external-url> < backup-metadata.tar
Certificate store
/etc/docker/certs.d/        <-- Certificate directory
└── localhost:5000          <-- Hostname:port
    ├── client.cert         <-- Client certificate
    ├── client.key          <-- Client key
    └── ca.crt              <-- Certificate authority that signed the registry certificate
Create client certificates
openssl genrsa -out client.key 4096
openssl req -new -x509 -text -key client.key -out client.cert
Create a CA, server and client keys with OpenSSL
openssl genrsa -aes256 -out ca-key.pem 4096
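The answer above only shows the CA key generation. The remaining server and client steps are not in the source; the sketch below follows the pattern in Docker's TLS documentation, made non-interactive with `-subj` and `-passout` flags, and all hostnames, passphrases, and file names are illustrative:

```shell
# Work in a scratch directory (hypothetical path).
mkdir -p /tmp/docker-tls && cd /tmp/docker-tls

# CA key (as above) and self-signed CA certificate.
openssl genrsa -aes256 -passout pass:changeme -out ca-key.pem 4096
openssl req -new -x509 -days 365 -sha256 -subj "/CN=example-ca" \
  -key ca-key.pem -passin pass:changeme -out ca.pem

# Server key and CSR, then sign the server certificate with the CA.
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=myhost.example.com" -sha256 -new \
  -key server-key.pem -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -passin pass:changeme \
  -CAcreateserial -out server-cert.pem

# Client key and CSR; restrict the client cert to client authentication.
openssl genrsa -out client.key 4096
openssl req -subj "/CN=client" -new -key client.key -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr \
  -CA ca.pem -CAkey ca-key.pem -passin pass:changeme \
  -CAcreateserial -out client.cert -extfile extfile-client.cnf
```

The resulting ca.pem, server-cert.pem/server-key.pem, and client.cert/client.key map onto the certificate store layout shown earlier.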