FIND DUPLICATE LINES
If there are no duplicates, all of the counts produced by uniq -c will be 1. Sort the results numerically from high to low, and any counts greater than 1 (the duplicates) will appear at the top of the output:
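For example, with a hypothetical sample file named items.txt:

```shell
# Create a small sample file (illustrative data)
printf 'apple\nbanana\napple\ncherry\n' > items.txt
# Count occurrences of each line, then sort the counts high-to-low;
# duplicated lines (count > 1) float to the top
sort items.txt | uniq -c | sort -rn
```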
Redirect a command's output to a file while monitoring it in real time:
Method 1
stdbuf -oL <command> | tee log_file.txt
Method 2
<command> | tee >(cat)
Method 3
unbuffer <command> | tee log_file.txt
LVM CHEATSHEET # DO NOT confuse with LLVM (CLANG)
A small "cheatsheet" with basic LVM (Logical Volume Manager) commands to create and remove Physical Volumes (PVs), Volume Groups (VGs), and Logical Volumes (LVs), as well as perform resizing:
Creating a Physical Volume (PV):
pvcreate /dev/sdx
Creating a Volume Group (VG):
vgcreate vg_name /dev/sdx
Adding a Physical Volume (PV) to a Volume Group (VG):
vgextend vg_name /dev/sdx
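The remaining operations the intro mentions (creating, resizing, and removing LVs) would look roughly like this; vg_name, lv_name, and the sizes are placeholders:

```shell
# Create a 10 GiB Logical Volume (LV) inside an existing VG
lvcreate -L 10G -n lv_name vg_name
# Grow the LV by 5 GiB and resize its filesystem in one step (-r = --resizefs)
lvextend -L +5G -r /dev/vg_name/lv_name
# Remove an LV, then its VG, then the PV (reverse order of creation)
lvremove /dev/vg_name/lv_name
vgremove vg_name
pvremove /dev/sdx
```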
RESTORE A BACKUP
lz4 -d rootfs_backup-*.tar.lz4
tar --xattrs-include='*.*' --numeric-owner -xpvf home_gentoober_backup-*.tar -C /
[SCRIPT] GENTOO MKSTAGE4:
#!/bin/bash
# Enable the globstar option to allow the use of ** for recursive file matching.
shopt -s globstar
# Set the backup_date variable to the current date (you can customize the date format as needed).
backup_date=$(date +"%d.%m.%Y")
backup_local="/mnt/backups/"
GREEN="\033[1;32m"
NC="\033[0m"
# Create a tar file with xattrs, excluding specific directories, and compress it with lz4.
tar cvpf - --xattrs --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/run/*,/mnt/*,/media/*,/home/gentoober/videos/*} / | lz4 -vz --fast - > "${backup_local}stage4-gentoober-${backup_date}.tar.lz4"
# Disable the globstar option.
shopt -u globstar
# Display a message.
echo -e "${GREEN}Stage4 completed and saved to ${backup_local}${NC}"
GENERATE RANDOM NUMBERS
head -c 4 /dev/urandom | od -N4 -tu4 | awk 'NR==1 {print $2 % 100000000 + 1}'
head -c4 /dev/urandom | od -A none -t x4
shuf -i 1-100000000 -n 1
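A quick sanity check that the od/awk pipeline stays within 1..100000000 (note the modulo introduces a slight bias toward smaller values; shuf avoids that):

```shell
# Draw one value and verify it lands in the intended range
n=$(head -c 4 /dev/urandom | od -N4 -tu4 | awk 'NR==1 {print $2 % 100000000 + 1}')
[ "$n" -ge 1 ] && [ "$n" -le 100000000 ] && echo "in range: $n"
```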
WATCH PROCESSES
watch -n 30 <command>
PROCESS=<PID/process_name/program>
while ps aux | grep -v grep | grep "$PROCESS" > /dev/null; do sleep 30; done && echo "Execution of $PROCESS has finished."
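An equivalent wait loop using pgrep (a sketch; -x matches the exact process name, which avoids the `grep -v grep` dance — here `sleep` plays the role of the long-running job):

```shell
PROCESS=sleep                      # hypothetical process name for this demo
sleep 2 &                          # stand-in for the long-running job
while pgrep -x "$PROCESS" > /dev/null; do sleep 1; done
echo "Execution of $PROCESS has finished."
```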
VNC LOGIN BRUTE FORCE (METASPLOIT)
msfconsole
use auxiliary/scanner/vnc/vnc_login
set rhosts 192.168.0.1
set pass_file /usr/share/wordlists/seclists/Passwords/500-worst-passwords.txt
run
HTTP basic auth
echo admin >user.txt # Try only 1 username
echo -e "blah\naaddd\nfoobar" >pass.txt # Add some passwords to try. 'aaddd' is the valid one.
nmap -p80 --script http-brute --script-args \
http-brute.hostname=pentesteracademylab.appspot.com,http-brute.path=/lab/webapp/basicauth,userdb=user.txt,passdb=pass.txt,http-brute.method=POST,brute.firstOnly \
pentesteracademylab.appspot.com
openssl encode
openssl base64 < file
openssl decode (paste the base64 output from above):
openssl base64 -d > file-COPY
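A round trip to confirm encode and decode are inverses (the file names are illustrative):

```shell
printf 'some secret data' > file
openssl base64 < file > file.b64        # encode
openssl base64 -d < file.b64 > file-COPY  # decode
cmp -s file file-COPY && echo "openssl round trip OK"
```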
xxd encode
xxd -p < file
xxd decode
xxd -p -r > file-COPY
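The same round trip for xxd (file names illustrative; -p emits a plain hex dump, -p -r reverses it):

```shell
printf 'some binary data' > file
xxd -p < file > file.hex       # encode to plain hex
xxd -p -r < file.hex > file-COPY  # decode back to bytes
cmp -s file file-COPY && echo "xxd round trip OK"
```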
File transfer - using WebDAV
cloudflared tunnel --url localhost:8080 &
# [...]
# +--------------------------------------------------------------------------------------------+
# | Your quick Tunnel has been created! Visit it at (it may take some time to be reachable): |
# | https://example-foo-bar-lights.trycloudflare.com |
# +--------------------------------------------------------------------------------------------+
# [...]
wsgidav --port=8080 --root=. --auth=anonymous
# Upload a file to your workstation
curl -T file.dat https://example-foo-bar-lights.trycloudflare.com
# Create a directory remotely
curl -X MKCOL https://example-foo-bar-lights.trycloudflare.com/sources
# Create a directory hierarchy remotely
find . -type d | xargs -I{} curl -X MKCOL https://example-foo-bar-lights.trycloudflare.com/sources/{}
# Upload all *.c files (in parallel):
find . -name '*.c' | xargs -P10 -I{} curl -T{} https://example-foo-bar-lights.trycloudflare.com/sources/{}