Tips for living comfortably in Unix shell
Partially borrowed from The art of command line
- Processing files and data
- System administration
- Working with disk
- Manage processes
- Network
- Bash
- Crypto
- Git
- Proxy
- Miscellaneous
cd -
- cd to $OLDPWD (the previous working directory)
-
rm -rf ./* ./.[!.]* ./..?*
- recursively remove all files in the current dir, including hidden ones
ln [-s] target [linkname]
- make [symbolic] links between files/dirs
ln -snf /path/to/target-directory linkname
- THE RIGHT way to create a symlink to a directory (NOT ln -sf which, provided 'linkname' already exists and points to a directory, would create a 'target-directory' symlink inside /path/to/target-directory/ pointing to /path/to/target-directory)
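A quick sketch of why the -n flag matters; the directory names here are throwaway ones created with mktemp, purely for illustration:

```shell
# Create two hypothetical release dirs and a 'current' symlink.
d=$(mktemp -d)
mkdir "$d/v1" "$d/v2"

ln -snf "$d/v1" "$d/current"   # create current -> v1
ln -snf "$d/v2" "$d/current"   # -n treats 'current' as the link itself, so it is repointed to v2

readlink "$d/current"          # prints the v2 path
# Without -n, a plain 'ln -sf' would follow the existing 'current' into v1
# and create a 'v2' symlink INSIDE v1 instead of replacing the link.
```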
ls -lia
- list files with symlinks and hardlinks (hardlinks are files sharing the same inode number)
diff /etc/hosts <(ssh somehost cat /etc/hosts)
- compare local /etc/hosts with a remote one
find /usr/lib -iname 'libstdc*'
- find ignoring case
find ./dir1 ./dir2 -name "*.cpp" -or -name "*.h" | xargs cat | wc -l
- calculate LOC -
find ./ -name "*.h" | xargs egrep -H "^class[ ]*Thread"
- search for declarations of Thread class -
find ./ -type d -path "*.svn" -prune | xargs rm -rf
- clean up .svn dirs: -
find ./ -name configure | xargs svn propset svn:executable yes
- set svn:executable flag on all configure scripts -
find ./ -name configure | xargs svn propdel svn:executable
- remove svn:executable flag from all configure scripts -
find /usr/local/lib/ -iname 'libicu*.so*' -exec du -ks {} \; | cut -f1 | awk '{total=total+$1}END{print total/1024 " MB"}'
- find the aggregate size of all files matching a mask (du -ks reports KB, so dividing by 1024 gives MB)
find . -name "*.py" -exec grep -qI '\r\n' {} ';' -exec perl -pi -e 's/\r\n/\n/g' {} '+'
- fix CRLF to LF line endings in all .py files in the current directory
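A minimal way to see the CRLF fix at work, using a throwaway file instead of a real source tree:

```shell
# Create a file with DOS line endings, then strip the CRs in place.
f=$(mktemp)
printf 'line1\r\nline2\r\n' > "$f"

perl -i -pe 's/\r$//' "$f"   # remove the CR before each LF, editing in place

od -c "$f" | head -2         # the dump should show no \r characters
```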
od -t x1 file
- print file contents as hex bytes
file myfile
or cat -e myfile
- test which newline type is used (CRLF/LF)
perl -i -pe 's/\r$//;' myfile
- replace CRLF -> LF in myfile
perl -i -pe 's/\r$//;' `find . | grep Makefile | xargs`
- replace CRLF -> LF in makefiles in the current dir recursively (use od -c <filename> to test for CRLF)
grep -RI KEEP_ALIVES_TIMEOUT /projects
- recursively search for files containing KEEP_ALIVES_TIMEOUT in /projects, skipping binary files
Useful grep options:
-C <num>
- show <num> lines of context around the match
-A <num> or -B <num>
- show <num> lines after or before the match
echo '12.34.5' | egrep -o '[0-9]+'
- print each match on a separate line: 12 34 5
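The -o flag is what makes grep emit only the matching parts, one per line:

```shell
# Each run of digits becomes its own output line.
echo '12.34.5' | grep -Eo '[0-9]+'
# 12
# 34
# 5
```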
-
Parsing space-delimited text
cat file | grep '[j]boss' | awk '{print $4}'
cat file | awk '/[j]boss/ {print $4}'
cat file | grep '[j]boss' | sed 's/\s\s*/ /g' | cut -d' ' -f4
</cat file>
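The pipelines above all pull out field 4; a made-up sample line stands in for the real file:

```shell
# Hypothetical ps-style line: pid, name, priority, command...
line='2223   jboss   20   java -jar app.jar'

echo "$line" | awk '{print $4}'                    # java
# Same result via sed squeezing whitespace, then cut on single spaces:
echo "$line" | sed 's/\s\s*/ /g' | cut -d' ' -f4   # java
```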
-
Changing file in-place
sed -i -r 's/^[;]?display_errors\s*=.*$/display_errors = On/' /etc/php5/apache2/php.ini
-
locate something
- find a file anywhere by name, but bear in mind updatedb may not have indexed recently created files -
which, whereis, type
- locate the binary, source and manual page for a command
file
- determine file type
unison
- file synchronization tool (uses the rsync algorithm)
- For general searching through source or data files (more advanced than grep -r), use ag.
- To convert HTML to text: lynx -dump -stdin
- For Markdown, HTML, and all kinds of document conversion, try pandoc.
- If you must handle XML, xmlstarlet is old but good.
- For JSON, use jq or pipe to 'python -m json.tool'.
- For Excel or CSV files, csvkit provides in2csv, csvcut, csvjoin, csvgrep, etc.
- For Amazon S3, s3cmd is convenient and s4cmd is faster. Amazon's aws is essential for other AWS-related tasks.
- If you ever need to write a tab literal in a command line in Bash (e.g. for the -t argument to sort), press ctrl-v [Tab] or write $'\t' (the latter is better as you can copy/paste it).
- For binary files, use hd for simple hex dumps and bvi for binary editing.
- To split files into pieces, see split (to split by size) and csplit (to split by a pattern).
- Use zless, zmore, zcat, and zgrep to operate on compressed files.
- To rename many files at once according to a pattern, use rename. For complex renames, repren may help.
rename 's/\.bak$//' *.bak
- recover backup files foo.bak -> foo
repren --full --preserve-case --from foo --to bar
- full rename of filenames, directories, and contents foo -> bar
-
cut
- select portions of each line of a file (IMHO a simpler and more user-friendly alternative to awk)
cut -d : -f 1,7 /etc/passwd
- extract login names and shells from passwd(5)
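The same cut invocation applied to a sample passwd-style line, so there is no need to touch the real /etc/passwd:

```shell
# Fields are colon-separated; -f 1,7 keeps login name and shell.
printf 'root:x:0:0:root:/root:/bin/bash\n' | cut -d : -f 1,7
# root:/bin/bash
```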
-
tar -xvf somearchive.tar [-C out_dir]
- extract from somearchive.tar with verbose output [to out_dir, which should already exist]
tar -xjf somearchive.tar.bz2
- for bz2-compressed tars
tar -xzf somearchive.tar.gz
- for gzip-compressed tars
tar cvzf log.tgz /var/log
- create a compressed archive log.tgz from the directory /var/log
wc
- count bytes, newlines and words in a file
stat
- view file statistics
gcc main.c >file
- stdout to file
gcc main.c 2>file
- stderr to file
gcc main.c 1>&2
- stdout to stderr
gcc main.c 2>&1
- stderr to stdout
gcc main.c >& file
- stdout and stderr to file (bash)
gcc main.c >file 2>&1
- stdout and stderr to file (ksh and bash)
gcc main.c 2>&1 >file
- stdout to file, stderr to the original stdout (note the difference with the above)
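Redirections are processed left to right, which is why the last two forms differ; a small demonstration with a throwaway log file:

```shell
log=$(mktemp)

# >file first, then 2>&1: stderr follows stdout INTO the file.
{ echo out; echo err >&2; } >"$log" 2>&1
cat "$log"    # contains both lines

# 2>&1 first duplicates the ORIGINAL stdout, so stderr escapes the file.
captured=$( { { echo out; echo err >&2; } 2>&1 >"$log"; } )
echo "$captured"   # err
cat "$log"         # out
```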
last reboot
- information about the last reboot
ipcs -m
- information about shared memory
ipcs -s
- information about existing semaphore sets
sysctl -a
- kernel configuration info
env
- environment variables
uname -a
- print system info (kernel, hostname, OS etc)
cat /etc/*-release
- information about the Linux distribution
cat /proc/version or dmesg | head -1
- pretty much the same, including the Linux distribution
cat /proc/cpuinfo
- CPU info
cat /proc/meminfo
- memory info
cat /proc/loadavg
- system load
vmstat
- virtual memory, CPU etc
free
- memory usage information
hostname
- print the name of the local machine (host)
stat
- info about the file system (files, dirs)
lsmod
- list of loaded modules
ldd <binary>
- print shared library dependencies
ldconfig, ld.so
- configure the location of dynamic libs
nm, objdump, ldd, readelf
- inspecting binaries (exported/imported symbols, dependent libraries etc)
export LD_DEBUG=symbols; ./myapp
- run myapp displaying shared libs symbol resolution progress
id
- show current user access rights
whoami
- your login name
who
- list the users logged into the machine
w
- show who is logged in and what they are doing
last
- show a listing of last logged-in users (taken from /var/log/wtmp)
rwho -a
- list all users logged into the network
uptime
- the amount of time since the last reboot
write user [tty]
- send a text message to a logged-in user on the same machine
passwd
- change password
adduser <username>
- add a new user (preferred to useradd)
adduser <username> sudo
- add an existing user to the sudo group on Debian/Ubuntu. The change takes effect the next time the user logs in
usermod -aG wheel <username> followed by visudo and uncommenting the %wheel ALL=(ALL) ALL line
- add an existing user to the sudo group on CentOS/RHEL. The change takes effect the next time the user logs in
for s in /etc/rc$(runlevel | awk '{ print $2}').d/*; do basename $s | grep '^S' | sed 's/S[0-9].//g' ;done | sort
- list services started on boot on Debian. As an alternative install the sysv-rc-conf package. On CentOS use chkconfig
- For a more in-depth system overview, use glances. It presents several system-level statistics in one terminal window. Very helpful for quickly checking on various subsystems.
dpkg -S `which program-name`
- check which package program-name comes from
sudo apt-get clean autoclean
sudo apt-get autoremove --purge -y
sudo /usr/bin/purge-old-kernels -y
sudo journalctl --vacuum-time=2d
sudo apt install -y ncdu
sudo ncdu /
Get-ChildItem -Path "$env:TEMP" -Directory -Recurse | Where-Object {$_.LastWriteTime -le $(Get-Date).AddDays(-30)} | Remove-Item -Recurse -Force
- remove all files and folders from %tmp% older than 30 days (run PowerShell as an admin)
system V | systemd equivalent | description
---|---|---
service foobar start | systemctl start foobar.service | start a service
service foobar stop | systemctl stop foobar.service | stop a service
service foobar restart | systemctl restart foobar.service | stop and then start a service
service foobar reload | systemctl reload foobar.service | when supported, reloads the config file without interrupting pending operations
service foobar status | systemctl status foobar.service | tells whether a service is currently running
ls /etc/rc.d/init.d/ | ls /lib/systemd/system/*.service /etc/systemd/system/*.service | list the services
service --status-all | systemctl list-units | list the services
update-rc.d foobar defaults | systemctl enable foobar.service | enables the service to start on boot
update-rc.d foobar remove | systemctl disable foobar.service | disables the service from starting on boot
? | systemctl is-enabled foobar.service | check if a service is configured to start on boot
? | systemctl is-active <service-name> | check if a service is currently active (running)
? | systemctl show <service-name> | show all information about the service
dstat = vmstat + iostat + ifstat
iostat
- brief system disk statistics
du -hs /www
- output the total size of the /www folder in human-readable form
df -h
- file system space usage
ncdu
- very handy disk usage tool; tells you why a disk is full, saves time over the usual commands like du -sh *
hdparm -ftT /dev/hda
- retrieve disk speed information
sudo apt-get autoremove --purge -y
- free some disk space by removing unused dependencies
mke2fs -j /dev/<drive-device>
- format with ext3
mkfs -t ext4 /dev/<drive-device>
- format with ext4
mount -a
- process /etc/fstab, skipping lines with the 'noauto' keyword
In order to add newly mounted points to /etc/fstab, use /etc/mtab, which contains a list of currently mounted devices in fstab format.
- Add physical disk space
- Add disk partition (fdisk or parted)
sudo fdisk /dev/sdb
- Inspect current partition layout (p)
- Inspect the partition range for the newly added disk space (F)
- Create a new primary partition (n and p)
- Apply the changes (w)
- Format disk partition
sudo mkfs -t ext4 /dev/sdb1
- Mount the disk partition
sudo mkdir -p /path/to/new/disk
sudo mount -t ext4 /dev/sdb1 /path/to/new/disk
In order to make your changes persistent it is strongly suggested to check /etc/mtab for the correct configuration line to append to /etc/fstab. This avoids mistyping /etc/fstab, which would make the system non-bootable.
- If the reason for adding a new disk is lack of space and you want to move the contents of an entire directory to the newly added disk, do it in steps: mount the new partition under a temporary location, copy your data to this partition and finally remount the partition at the original directory path.
For example, you notice that your disk is full because /var/lib/docker takes too much space, so you want to move it to a new disk. You do it in steps:
- stop docker
- move the contents of /var/lib/docker somewhere, e.g. to /var/lib/docker-bak
- mount the new added disk as /var/lib/docker by adding to /etc/fstab
/dev/sdb1 /var/lib/docker ext4 defaults 1 1
- reboot
- stop docker
- move the backed up data back to the /var/lib/docker and discard the backup
- Take inventory of your current disk layout:
pvs
vgs
lvs
- Add physical disk space
- Add disk partition (fdisk)
fdisk /dev/sda
- Inspect current partition layout (p)
- Inspect the partition range for the newly added disk space (F)
- Create a new primary partition (n and p)
- Change the partition type to Linux LVM (t and 8e)
- Apply the changes (w)
- Restart once completed
- Extend the volume group with the added partition (imagine the partition you just added is /dev/sda3 and your LVM volume group reported by vgs is ubuntu16-vg)
vgextend ubuntu16-vg /dev/sda3
- Extend logical volume with the added partition
lvextend /dev/ubuntu16-vg/root /dev/sda3
resize2fs /dev/ubuntu16-vg/root
dstat = vmstat + iostat + ifstat
htop
- similar to top, but better (e.g. shows correct CPU timings for multithreaded programs using NPTL threads; also more user-friendly)
ps aux
- obtain process list
ps auxww
- with wide output (matters when the line does not fit the window width)
ps auxf
- with tree
ps -eLf
- info about threads
ps -eLo pid,ppid,lwp,%cpu,%mem,vsize,rssize
- info per thread with CPU/memory usage
ps -o pid,cmd --ppid <ppid>
- get processes having the given parent
pstree -p
- display process tree
cat /proc/self
- info about self
command &
- run command in the background
Ctrl-z + bg
- interactively move the current foreground process to the background
kill <pid>
- try to kill the process with SIGTERM
kill -9 <pid>
- kill the process with SIGKILL; unlike SIGTERM, SIGKILL cannot be caught by a process
killall <name>
- kill all processes with the specified name
kill -s 0 <pid>
- check the existence of a process. Cannot be sent to system processes (such as 1 [init]); in that case simply use ps -p <pid> -o pid=
sudo kill -HUP 1
- tell 'init' that it should re-read /etc/inittab
pkill -f mask
- kill all processes matching the pattern
pidof
- get the pid of a running program
fuser
- identify processes using files and sockets
nice <program> <level>
- run program with the given niceness level. Specifying a high nice level will make the program run with a lower priority
/proc/mounts == /etc/mtab
- mounts
nohup <command>
- runs the given command with hangup signals ignored, so that the command can continue running in the background after you log out. E.g. you remotely log in to the server, then give:
% ssh <some_server> -l <username>
% nohup <some_long_executing_program> &
% logout
watch
- execute a program periodically with fullscreen output. For example: watch tail -n 25 /tmp/myprog.log will periodically print the last 25 lines of /tmp/myprog.log
gdb <program> <pid>
- attach to process pid, associating it with the program executable
gdb <program> <core>
- debug core file, associating it with the program executable
time <command>
- execute command and display its resource usage after it finishes
strace
- trace system calls and signals. E.g. strace ./myprog will execute the program and intercept all its system calls and signals
ltrace
- library call tracer (like strace for system calls)
gcov
- code coverage tool
gprof
- profiling tool
- For web debugging: curl (especially curl -I), wget, and the more modern httpie
dstat = vmstat + iostat + ifstat
netstat -tlnp
- show all TCP listening sockets
netstat -tanp
- show all TCP listening sockets and TCP sockets with established connections
netstat -anp
- show all (TCP and UDP) listening sockets and sockets with established connections
nmap
- network exploration tool and security scanner (e.g. port scanner)
xprobe (xprobe2)
- OS fingerprint scanner (guesses OS version)
finger
- look up users of a (remote) OS
rpcinfo
- report rpc information of the (remote) host
netcat $ip $port < /dev/zero
- send a stream of zeroes to the server (might be useful for testing)
echo "hello from server" | netcat -l -p 443
- simple server, can be checked with telnet
python -m SimpleHTTPServer 80
- simple webserver for testing
lsof -i TCP:1234
- who is listening on port 1234
fuser
- identify processes using files and sockets
host [pcname]
- DNS lookup (of pcname)
nslookup
- query Internet domain name servers (DNS). Most implementations of nslookup do not look at /etc/hosts, they only query domain name servers
tcpdump
- console sniffer
tcpdump tcp port 80
tcpdump -X -i lo0 tcp port 1235
- sniff on lo0:1235 and print packet payloads
whois
- submit a whois query
tcpkill
- kill connections to or from a particular host, network, port, or combination of all
- Use mtr as a better traceroute, to identify network issues
- To find which socket or process is using bandwidth, try iftop or nethogs
- The ab tool (comes with Apache) is helpful for quick-and-dirty checking of web server performance. For more complex load testing, try siege
- For more serious network debugging: wireshark, tshark, or ngrep
minicom
- serial port console client
dig +short myip.opendns.com @resolver1.opendns.com
- resolve your external IP
!<num>
- execute command number num from the history list
Ctrl + r
- search history in reverse order; press Ctrl + r again to search further
Ctrl + a
- go to the start of the command line
Ctrl + e
- go to the end of the command line
Ctrl + k
- delete from cursor to the end of the command line
Ctrl + u
- delete from cursor to the start of the command line
Ctrl + w
- delete the word before the cursor
Ctrl + y
- paste word or text that was cut using one of the deletion shortcuts (such as the one above) after the cursor
Alt + b
- move backward one word (or go to the start of the word the cursor is currently on)
Alt + f
- move forward one word (or go to the end of the word the cursor is currently on)
shopt -s dotglob
- enable visibility of hidden files in the bash shell
- To continue a command on a new line in the shell, type \ and hit Enter
-r file
- check if file is readable
-w file
- check if file is writable
-x file
- check if we have execute access to file
-f file
- check if file is an ordinary file (as opposed to a directory, a device special file, etc.)
-s file
- check if file has size greater than 0
-d file
- check if file is a directory
-e file
- check if file exists; true even if file is a directory
if [ -f "$file" ] ; then
echo $file exists
fi
[ "$s1" = "$s2" ]
- check if s1 equals s2
[ "$s1" != "$s2" ]
- check if s1 is not equal to s2
[ -z "$s1" ]
- check if s1 has size 0
[ -n "$s1" ]
- check if s1 has nonzero size
[ "$s1" ]
- check if s1 is not the empty string
[[ "$s1" < "$s2" ]] or [ "$s1" \< "$s2" ]
- check if s1 is less than s2 in alphabetical order
- Checking using regex:
re='some REGEX'
if [[ $foo =~ $re ]]
...
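The string tests and the regex match in action; digits_only is an illustrative helper, not a standard command:

```shell
s1=apple; s2=banana
if [[ "$s1" < "$s2" ]]; then echo "sorts first"; fi

# Regex match with =~ (bash); keep the pattern in a variable, unquoted on use.
re='^[0-9]+$'
digits_only() { [[ $1 =~ $re ]]; }

digits_only 12345 && echo "numeric"
digits_only 12a45 || echo "not numeric"
```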
[ "$n1" -eq "$n2" ] or ((n1 == n2))
- check if n1 equals n2
[ "$n1" -ne "$n2" ] or ((n1 != n2))
- check if n1 is not equal to n2
[ "$n1" -lt "$n2" ] or ((n1 < n2))
- check if n1 < n2
[ "$n1" -le "$n2" ] or ((n1 <= n2))
- check if n1 <= n2
[ "$n1" -gt "$n2" ] or ((n1 > n2))
- check if n1 > n2
[ "$n1" -ge "$n2" ] or ((n1 >= n2))
- check if n1 >= n2
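A short demonstration of both numeric forms, plus the classic trap of comparing numbers as strings:

```shell
n1=3; n2=7
[ "$n1" -lt "$n2" ] && echo "numerically smaller"
(( n1 < n2 )) && echo "also smaller, arithmetic form"

# -lt compares numbers; \< compares strings, which can surprise you:
[ 10 -gt 9 ] && echo "10 > 9 numerically"
[ 10 \< 9 ] && echo "but '10' sorts before '9' as a string"
```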
for ((i=1; i<=n; i++)); do
...
done
https://gist.github.com/kindkaktus/11d7005ddbf955772dbb
echo '$1$2hello'
- writes literally $1$2hello on screen
echo "$1$2hello"
- writes the values of parameters 1 and 2 and the string hello
v=' one two
three '
echo $v # will replace all whitespaces with a single space and output one two three
echo "$v" # will print the value of $v as is
v="*.sh"
echo $v # will print test1.sh test2.sh
echo "$v" # will print *.sh
if [ $foo -ge 3 -a $foo -lt 10 ]; then
if [ $my_error_flag -eq 1 ] || [ $my_error_flag_o -eq 2 ]; then
if [ $my_error_flag -eq 1 ] || [ $my_error_flag_o -eq 2 ] || ([ $my_error_flag -eq 1 ] && [ $my_error_flag_o -eq 2 ]); then
if [ -f /var/run/reboot-required -o -f /var/run/reboot-required.pkgs ]; then
if [[ $num -eq 3 && "$stringvar" == "foo" ]]; then
if [[ $num -eq 3 && "$stringvar" == "foo" ]]; then   # note: -a/-o are only valid inside [ ], not [[ ]]; use && and || there
if [[ -f /var/run/reboot-required || -f /var/run/reboot-required.pkgs ]]; then
if (((A == 0 || B != 0) && C == 0)); then
- for arithmetic expressions
i=$(( (i + 1) % 5 ))
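The $(( )) arithmetic expansion above can be watched in a loop; here the modulo counter cycles through 0..4:

```shell
# Seven increments starting from 0, wrapping at 5.
i=0
for _ in 1 2 3 4 5 6 7; do
  i=$(( (i + 1) % 5 ))
done
echo "$i"   # 7 mod 5 = 2
```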
[ -f ./file ] || { echo "The file does not exist"; touch ./file; }
- notice the semicolon ; required before the closing } inside {..}
set -e
bad_func() { return 1; }
func()
{
bad_func
echo "Bad function is called"
}
if ! func ; then
echo "SURPRISE We don't get here!"
fi
func || echo "SURPRISE! We don't get here!"
func
echo "CORRECT! We don't get here!"
set -e
bad_func() { return 1; }
func1()
{
local var=$(bad_func)
echo "SURPRISE We get here!"
}
func2()
{
local var
var=$(bad_func)
echo "CORRECT! We never get here!"
}
VAR=$(bad_func)
echo "CORRECT! We never get here!"
foo() {...}
- ok and portable
function foo() {...}
- ok in bash but not widely portable
Correct:
for x in "$@"; do
echo "parameter: '$x'"
done
Also correct:
for x; do
echo "parameter: '$x'"
done
Not correct:
for x in $*; do
echo "parameter: '$x'"
done
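The difference shows up as soon as an argument contains a space; count is an illustrative helper used with a hypothetical two-argument call:

```shell
count() { echo $#; }
set -- "one word" two   # simulate two script arguments

n_correct=$(count "$@")   # "$@" preserves the original arguments
n_wrong=$(count $*)       # unquoted $* re-splits on whitespace

echo "$n_correct"   # 2
echo "$n_wrong"     # 3
```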
${parameter-default}
- if parameter is not declared, use default
${parameter:-default}
- if parameter is not declared or is null, use default
variable= #declare variable and set it to null.
echo "${variable-0}" # no output
echo "${variable:-1}" # 1
unset variable # variable is not declared
echo "${variable-2}" # 2
echo "${variable:-3}" # 3
- Checking a variable exists:
${name:?error message}
- For example, to fetch the argument of a Bash script that requires a single argument:
arg=${1:?usage: $0 input_file}
- ${var#Pattern}
  Remove from var the shortest part of Pattern that matches the front of var.
- ${var##Pattern}
  Remove from var the longest part of Pattern that matches the front of var.
- ${var%Pattern}
  Remove from var the shortest part of Pattern that matches the back of var.
- ${var%%Pattern}
  Remove from var the longest part of Pattern that matches the back of var.
- ${var/Pattern/Replacement}
  First match of Pattern within var replaced with Replacement. If Replacement is omitted, the first match of Pattern is deleted.
- ${var//Pattern/Replacement}
  All matches of Pattern within var replaced with Replacement. If Replacement is omitted, all occurrences of Pattern are deleted.
- ${var/#Pattern/Replacement}
  If the prefix of var matches Pattern, substitute Replacement for Pattern.
- ${var/%Pattern/Replacement}
  If the suffix of var matches Pattern, substitute Replacement for Pattern.
- ${0%/*}
  Retrieves the script directory name (same as $(dirname $0), but much faster).
- ${0##*/}
  Retrieves the script base name (same as $(basename $0), but much faster).
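The pattern-removal operators applied to one sample path, with the result of each expansion:

```shell
path=/var/log/app.tar.gz

echo "${path##*/}"    # app.tar.gz   (longest */ prefix removed: basename)
echo "${path%/*}"     # /var/log     (shortest /* suffix removed: dirname)
echo "${path#*.}"     # tar.gz       (shortest *. prefix removed)
echo "${path%%.*}"    # /var/log/app (longest .* suffix removed)
echo "${path/app/db}" # /var/log/db.tar.gz (first 'app' replaced)
```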
for file in file1 file2 /var/log/*.log
do
  [ -f "$file" ] || continue
  cp "$file" /tmp
done
Notice: the [ -f "$file" ] check is necessary because if there are no files matching /var/log/*.log, the pattern itself will be passed to cp, which will produce the error: cp: /var/log/*.log: No such file or directory
Another correct way to copy files by mask is:
find . -type f -exec some command {} \;
WRONG way to copy files by mask (though used very often):
for i in $(ls *.mp3); do # WRONG because of word splitting (file names with spaces), globbing, and because `ls` may mangle file names
  some command "$i"
done
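A safe glob loop over throwaway files; nullglob (bash) is an alternative to the [ -f ] guard, making an empty glob expand to nothing instead of itself:

```shell
d=$(mktemp -d)
touch "$d/a.mp3" "$d/b with spaces.mp3"

shopt -s nullglob
n=0
for f in "$d"/*.mp3; do
  n=$((n + 1))   # "$f" is a real file name, spaces and all
done
echo "$n"   # 2
```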
source <file> or . <file>
- include another file; the dot syntax is more portable
ctrl-r
- search through command history
ctrl-w
- delete the last word
ctrl-u
- delete the whole line
alt-b and alt-f
- move by word
ctrl-k
- kill to the end of the line
(cd somedir || exit; some-command)
- do something in the somedir dir, continue in the current dir after the subshell finishes
set -x
- enable debugging of a bash script
openssl x509 -noout -text -in cert.pem
- view cert info (shows only the first cert)
openssl x509 -noout -text -fingerprint -sha1 -in cert.pem
- view cert info including its sha1 fingerprint
openssl x509 -purpose -in cert.pem -noout
- view effective cert purposes (shows only the first cert)
openssl x509 -outform der -in cert.pem -out cert.der
- convert PEM to DER
openssl x509 -inform der -in cert.der -out cert.pem
- convert DER to PEM
openssl crl2pkcs7 -nocrl -certfile certs.pem | openssl pkcs7 -print_certs -text -noout
- view cert info (shows all certs found in certs.pem)
openssl smime -sign -in text.txt -signer signingcertkey.pem -inkey signingcertkey.pem -out signed.pkcs7.smime
- SMIME sign
openssl smime -verify -in signed.pkcs7.smime -CAfile signingcertca.pem
- verify an SMIME-signed message against the issuer CA
openssl smime -verify -in message -noverify -signer cert.pem
- extract the cert from an SMIME-signed message to cert.pem
openssl rsa -in privateKey.pem -out newPrivateKey.pem
- remove the passphrase from an RSA private key
openssl rsa -in private.key -inform PEM -out private-rsa.key -outform PEM
- convert a PKCS#8 private key (i.e. the one with a BEGIN PRIVATE KEY header) to a PKCS#1 RSA private key (i.e. the one with a BEGIN RSA PRIVATE KEY header)
openssl pkcs12 -nodes -in file.pfx -out file.pem
- extract everything from a PKCS#12 package
openssl pkcs12 -export -out certkey.pfx -inkey key.pem -in cert.pem
- create a PKCS#12 package
openssl verify -CAfile ca.pem cert.pem
- verify that the given certificate is issued by the given CA
echo -n "some text" | openssl base64 -e
- base64-encode
echo "ABCDEF==" | openssl base64 -d
- base64-decode
echo -n "text" | md5sum
- calculate the MD5 digest of the input
echo -n "text" | uuencode -m /dev/stdout
- base64-encode
htpasswd [-c] passwd_file username
- generate an Apache password for username and store it in passwd_file. The -c option creates a new passwd file instead of adding lines to an existing one
echo -n | openssl s_client -showcerts -connect github.com:443 2>/dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/local/share/ca-certificates/DigiCert-CA.crt && update-ca-certificates
- quick-install the github CA certificate into the trusted store
cat signingca.pem signingkey.pem rootca.pem > signingcacertkey.pem
openssl x509 -req -in certreq.p10 -sha256 -extfile openssl.cnf -extensions usr_cert -CA signingcacertkey.pem -CAkey signingcacertkey.pem -CAcreateserial -out cert.pem -days 365
# produce also PKCS#7 cert
openssl crl2pkcs7 -nocrl -certfile cert.pem -out cert.p7b -certfile signingcacertkey.pem
openssl x509 -x509toreq -in cert.pem -signkey key.pem -text | sed -ne '/-BEGIN CERTIFICATE REQUEST-/,/-END CERTIFICATE REQUEST-/p' > recovered.csr
git checkout --ours /path/to/conflict/file
git add /path/to/conflict/file
git rebase --continue
git rebase -Xtheirs origin/master
git rebase -Xours origin/master
Add repository as git subtree
git remote add pretty-python-remote https://github.com/kindkaktus/PrettyPython
git fetch pretty-python-remote
git read-tree --prefix=Software/Import/PrettyPython -u pretty-python-remote/master
git commit -a -m"Add PrettyPython library as a subtree from https://github.com/kindkaktus/PrettyPython"
git push
... and later on, incorporate upstream changes into our repo
git fetch pretty-python-remote
git pull -s subtree --no-edit pretty-python-remote master
git push
List subtrees merged to your project:
git log | grep git-subtree-dir | tr -d ' ' | cut -d ":" -f2 | sort | uniq
git checkout -B master origin/master
git merge --no-ff --no-commit origin/feature
git diff master
git commit -a
git push
git branch -D unneeded-branch
- delete local branchgit push origin --delete unneeded-branch
- delete remote branchgit fetch -p
- prune remote-tracking branches no longer on remote
git config --global alias.lg "log --color=auto --graph --abbrev-commit --decorate --date=relative --format=format:'%C(bold blue)%h%C(reset) - %C(green)(%ar)%C(reset) %C(white)%s%C(reset) %C(bold black)- %an%C(reset)%C(bold yellow)%d%C(reset)'"
git config --global alias.lg2 "log --color=auto --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(cyan)%aD%C(reset) %C(green)(%ar)%C(reset)%C(bold yellow)%d%C(reset)%n'' %C(white)%s%C(reset) %C(bold black)- %an%C(reset)' --all"
git config --global status.color "auto"
git config --global color.status.added "green"
git config --global color.status.changed "bold blue"
git config --global color.status.untracked "magenta"
git config --global color.status.deleted "red"
git config --global alias.ci "commit"
git config --global alias.st "status"
git config --global alias.di "diff"
- Reset master branch to the commit in the master branch before the merge
git cherry-pick -m 1 <sha-of-the-merge-commit>
- Now just add remaining commits e.g. by cherry picking them, reshuffling them as you wish
Diff committed file to the previous commit:
git diff HEAD@{1} filename
Diff between the current and the previous commit, ignoring whitespace:
git diff -w HEAD^
Diff between the current and the previous commit, file names only:
git diff HEAD^ --name-status
Revert local modifications to a file
git checkout filename
Revert all local modifications
git checkout -f
Checkout remote branch overwriting a local branch
git checkout -B my-branch origin/my-branch
Combine two last commits into one
git reset --soft HEAD^ && git commit --amend --no-edit
Set commit date
git commit --date "13 Sep 2018 21:03 CET"
Oh shit, I accidentally committed something to master that should have been on a brand new branch!
# create a new branch from the current state of master
git branch new-branch-name
# remove the commit from the master branch
git reset HEAD~ --hard
git checkout new-branch-name
Oh shit, I accidentally committed to the wrong branch!
git reset HEAD~ --soft
git stash
# move to the correct branch
git checkout name-of-the-correct-branch
git stash pop
git add . # or add individual files
git commit -m "your message here"
another way is to use cherry-pick
git checkout name-of-the-correct-branch
git cherry-pick master
git checkout master
git reset HEAD~ --hard
Duplicate repo including all branches and tags
git clone --bare <original-repo-url> <clone-dir>
cd <clone-dir>
git push --mirror <new-repo-url>
Check this out: The seven rules of a great git commit message
Cleanup all stopped containers and untagged images
docker rm $(docker ps -a -q)
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
Remove all unused containers, networks, images (both dangling and unreferenced), and volumes.
docker system prune -a -f
How to set up a SOCKS proxy server and proxy traffic from browser on Windows and from git client on *nix
Just make sure you have the ssh daemon up and running. That's the nice thing about a SOCKS proxy: once you have sshd running, no further server-side configuration is needed.
Create an ssh session in Putty with the hostname and ssh port of your proxy server. Under menu Connection -> SSH -> Tunnels add a source port (say, 1337) and destination "dynamic". Open this session, enter login credentials and leave the session open. In your browser (Firefox/Chrome) just specify SOCKS5 server localhost and port 1337
Setup ssh tunnel to your proxy my-proxy.org:2222
ssh -D 1337 -f -C -q -N -p 2222 your-username@my-proxy.org
enter username and password when prompted
When accessing git repo via ssh
protocol e.g. ssh://git@my-repo.com/my-product.git on Linux
Add to ~/.ssh/config: (make sure to install the program specified via ProxyCommand)
Host my-repo.com
User git
ProxyCommand connect-proxy -S localhost:1337 %h 2222
alternatively to connect-proxy you may use socat or tsocks.
git config --global http.proxy socks5://localhost:1337
git config --global core.gitproxy "git-proxy"
git config --global socks.proxy "localhost:1337"
for more info:
- http://cms-sw.github.io/tutorial-proxy.html
- https://www.digitalocean.com/community/tutorials/how-to-route-web-traffic-securely-without-a-vpn-using-a-socks-tunnel#step-4-(mac-os-xlinux)-—-creating-shortcuts-for-repeated-use
date MMDDhhmmYYYY
- set date
date -u +%s
- get UTC as a Unix timestamp
date $(date +%m%d%H%M%Y.%S -d '4 seconds')
- add 4 seconds to the current time
ntpd -gq
- set the time and exit
uuidgen
- generate a uuid
screen
- screen window manager that multiplexes a physical terminal between several processes. Useful e.g. for having multiple screens over one ssh connection
grabserial
- reads a serial port and writes the data to standard output. Useful e.g. to measure system boot time (-t option)
echo $?
- exit code of the last executed program
Taken from here
To best share /var/www between multiple users who should be able to write to it, it should be assigned a common group. For example, the default group for web content on Ubuntu and Debian is www-data.
- Make sure all the users who need write access to /var/www are in this group:
sudo usermod -a -G www-data phpadmin
- Give the www-data group ownership of /var/www:
sudo chgrp -R www-data /var/www
- Give the www-data group write permissions on /var/www:
sudo chmod -R g+w /var/www
- It is also recommended to set setgid on /var/www so that all files created under /var/www are owned by the www-data group:
sudo find /var/www -type d -exec chmod g+s {} \;
Notice that it is not possible to set setuid on /var/www so that all new files created under /var/www are owned by the phpadmin user (only possible on FreeBSD). The best you can do is to give all existing files in /var/www read and write permission for owner and group:
sudo find /var/www -type f -exec chmod ug+rw {} \;
You might have to log out and log back in to be able to make changes if you're editing permission for your own account.
usermod --home /var/www/ ftpuser
then set the required permissions for ftpuser on /var/www/ if needed (see the above section about apache)
Edit /etc/vsftpd/vsftpd.conf:
chroot_local_user=YES
and restart vsftpd
tail -10000 /var/log/apache2/access.log | awk '{print $1}' | sort | uniq -c | sort -n | tail
tail -10000 /var/log/apache2/access.log | awk '{print $12}' | sort | uniq -c | sort -n | tail
apt-get changelog <package>
- for Debian/Ubuntu
rpm -q --changelog <package> | head
- for CentOS
apt-get --purge -y remove mysql-server mysql-common mysql-client
apt-get autoremove
rm -rf /etc/mysql /var/lib/mysql*
apt-get install -f mysql-server
systemctl start mysql
systemctl status mysql
apt-get install -f mysql-client
... possibly reinstall MySQL development bindings e.g.
apt install -y libmysql++-dev