Assorted snippets I'm too bored to keep rewriting and too lazy to develop into a separate tool. Mostly DevOps automation routines.
Warning! Use strictly at your own risk.
Reset an IAM user's console password:
aws --profile PROFILE iam update-login-profile --user-name USER --password 'NEW_PASSWORD'
Search for something in a junk S3 bucket. The second one iterates over buckets with a threaded tool (s4cmd).
aws --profile PROFILE s3 ls s3://BUCKET/ --recursive | grep PATTERN
for b in $(s4cmd -r ls | grep ftp | awk '{print $4}') ; do s4cmd -r ls ${b}1465/ ; done
aws --profile PROFILE ec2 describe-subnets --filters "Name=vpc-id,Values=VPC_ID" | jq '.Subnets[] | .SubnetId + "=" + "\(.AvailableIpAddressCount)"'
for u in $(aws --profile PROFILE iam list-users | jq ".Users[].UserName" --raw-output); do aws --profile PROFILE iam list-access-keys --user-name $u | jq '.AccessKeyMetadata[] | .UserName + ":" + .AccessKeyId' ; done
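To extend that key audit, each key's last-use time can be pulled with get-access-key-last-used. A sketch, assuming the loop variable `$u` from the command above and the usual `.AccessKeyLastUsed.LastUsedDate` response shape:

```shell
# For each access key of user $u, print the key ID and when it was last used.
# PROFILE and $u are placeholders carried over from the audit loop above.
for k in $(aws --profile PROFILE iam list-access-keys --user-name "$u" \
    | jq -r '.AccessKeyMetadata[].AccessKeyId'); do
  echo -n "$k "
  aws --profile PROFILE iam get-access-key-last-used --access-key-id "$k" \
    | jq -r '.AccessKeyLastUsed.LastUsedDate'
done
```

Keys that report a null date have never been used and are prime candidates for deactivation.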
Terminated instances lingering in SSM inventory are sometimes a problem. The filter below excludes them:
aws ssm get-inventory --filters '[{"Key":"AWS:InstanceInformation.InstanceStatus","Values":["terminated"],"Type":"NotEqual"}]'
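When only the instance IDs are needed from that inventory dump, a jq filter gets them directly. A sketch, assuming the usual `Entities[].Id` response shape:

```shell
# Pull bare instance IDs out of the get-inventory response
aws ssm get-inventory \
  --filters '[{"Key":"AWS:InstanceInformation.InstanceStatus","Values":["terminated"],"Type":"NotEqual"}]' \
  | jq -r '.Entities[].Id'
```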
One more tip: pipe huge JSON to gron (greppable JSON), which flattens it into one grep-friendly assignment per line.
aws ssm start-session --target INSTANCE_ID
k get events --sort-by='.lastTimestamp'
cp ~/.kube/config ~/.kube/config.bak && KUBECONFIG=~/.kube/config:./ok-cluster/ok-cluster-eks-a-cluster.kubeconfig kubectl config view --flatten > /tmp/config && mv /tmp/config ~/.kube/config
aws eks --profile ok-dev update-kubeconfig --name eks-terra --alias ok-eks
for p in $(aws configure list-profiles) ; do for c in $(aws eks --profile $p list-clusters | jq '.clusters[]' | tr -d '\"') ; do echo $p $c ; aws eks --profile $p update-kubeconfig --name $c --alias $p:$c ; done ; done
kubectl patch pv "$PV_NAME" -p \
'{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
for n in $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ; do lb="" ; for a in $(kubectl label --list nodes $n | sort | grep -e NodeType -e lifecycle | cut -d= -f 2) ; do lb="${lb}$a" ; done ; kubectl label nodes $n node-role.kubernetes.io/$lb= ; done
kubectl debug -it POD --image=IMAGE_WITH_TOOLS --target=CONT --share-processes
SELECT CONCAT('CALL mysql.rds_kill(',id,');')
FROM information_schema.processlist
WHERE user='UGLY_BASTARD';
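The SELECT above only prints the CALL statements; to actually execute them, the output can be fed straight back into mysql. A sketch with placeholder host and credentials:

```shell
# Generate kill statements for one user's connections and pipe them back in.
# -N drops the column header so only the CALL statements reach the second mysql.
mysql -h RDS_HOST -u admin -p -N -e \
  "SELECT CONCAT('CALL mysql.rds_kill(',id,');') FROM information_schema.processlist WHERE user='UGLY_BASTARD';" \
  | mysql -h RDS_HOST -u admin -p
```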
SHOW OPEN TABLES WHERE In_use > 0;
SHOW ENGINE INNODB STATUS;
Skip replication errors. Read the error logs before skipping; it is important to understand what you are skipping.
CALL mysql.rds_skip_repl_error;
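To see what would actually be skipped, check the replica's error fields first. A sketch with placeholder host and credentials (newer MySQL versions use SHOW REPLICA STATUS; the older SHOW SLAVE STATUS is shown here):

```shell
# Surface only the replication error fields from the replica status dump
mysql -h REPLICA_HOST -u admin -p -e 'SHOW SLAVE STATUS\G' \
  | grep -i -e 'Last_SQL_Error' -e 'Last_IO_Error'
```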
export KFK=KFK_HOST
# DANGER: this deletes EVERY topic on the cluster
for t in $(./bin/kafka-topics.sh --bootstrap-server $KFK:9092 --list) ; do ./bin/kafka-topics.sh --bootstrap-server $KFK:9092 --topic $t --delete ; done
# Same loop with --describe | grep 'ReplicationFactor:1' instead of --delete, to list single-replica topics
terraform state pull | inframap generate --connections=false | dot -Tpng > ~/Downloads/schema.png
(kubectl proxy --accept-hosts '.*' &) ; docker run -it -p 8080:8080 -e CLUSTERS=http://docker.for.mac.localhost:8001 hjacobs/kube-ops-view