I am glad that you are here! I was working on bioinformatics a few years ago and was amazed by those single-word bash commands, which are much faster than my dull scripts, and by how much time is saved through learning command-line shortcuts and scripting. In recent years I have been working on cloud computing, and I keep recording useful commands here. Not all of them are one-liners, but I put effort into making them brief and swift. I mainly use Ubuntu, Amazon Linux, RedHat, Linux Mint, Mac and CentOS, so I am sorry if some commands don't work on your system.
This blog focuses on simple bash commands for parsing data and Linux system maintenance that I acquired from work and the LPIC exam. I apologize that there are no detailed citations for all the commands; most of them come from dear Google and Stack Overflow.
English and bash are not my first languages, so please correct me anytime, thank you. If you know other cool commands, please teach me!
Here's a more stylish version of Bash-Oneliner~
- Terminal Tricks
- Variable
- Grep
- Sed
- Awk
- Xargs
- Find
- Condition and Loop
- Math
- Time
- Download
- Random
- Xwindow
- System
- Hardware
- Networking
- Data Wrangling
- Others
Ctrl + n : same as Down arrow.
Ctrl + p : same as Up arrow.
Ctrl + r : begins a backward search through command history. (Keep pressing Ctrl + r to move backward.)
Ctrl + s : to stop output to terminal.
Ctrl + q : to resume output to terminal after Ctrl + s.
Ctrl + a : move to the beginning of line.
Ctrl + e : move to the end of line.
Ctrl + d : if you have typed something, Ctrl + d deletes the character under the cursor; otherwise, it exits the current shell.
Ctrl + k : delete all text from the cursor to the end of line.
Ctrl + x + backspace : delete all text from the beginning of line to the cursor.
Ctrl + t : transpose the character before the cursor with the one under the cursor; press Esc + t to transpose the two words before the cursor.
Ctrl + w : cut the word before the cursor; then Ctrl + y pastes it.
Ctrl + u : cut the line before the cursor; then Ctrl + y pastes it.
Ctrl + _ : undo typing.
Ctrl + l : equivalent to clear.
Ctrl + x + Ctrl + e : launch editor defined by $EDITOR to input your command. Useful for multi-line commands.
Esc + u
# converts text from cursor to the end of the word to uppercase.
Esc + l
# converts text from cursor to the end of the word to lowercase.
Esc + c
# converts the letter under the cursor to uppercase.

# run command number 53 from your history
!53

# run the previous command using sudo
sudo !!
# of course you need to enter your password

# Run the last command but change one parameter using caret substitution (e.g. last command: echo 'aaa' -> rerun as: echo 'bbb')
#last command: echo 'aaa'
^aaa^bbb
#echo 'bbb'
#bbb
#Notice that only the first aaa will be replaced, if you want to replace all 'aaa', use ':&' to repeat it:
^aaa^bbb^:&
#or
!!:gs/aaa/bbb/
!cat
# or
!c
# run cat filename again

# '*' serves as a "wild card" for filename expansion.
/b?n/?at #/bin/cat
# '?' serves as a single-character "wild card" for filename expansion.
/etc/pa*wd #/etc/passwd
# ‘[]’ serves to match the character from a range.
ls -l [a-z]* #list all files with alphabet in its filename.
# ‘{}’ can be used to match filenames with more than one patterns
ls {*.sh,*.py} #list all .sh and .py files

$0 :name of shell or shell script.
$1, $2, $3, ... :positional parameters.
$# :number of positional parameters.
$? :most recent foreground pipeline exit status.
$- :current options set for the shell.
$$ :pid of the current shell (not subshell).
$! :is the PID of the most recent background command.
$DESKTOP_SESSION current display manager
$EDITOR preferred text editor.
$LANG current language.
$PATH list of directories to search for executable files (i.e. ready-to-run programs)
$PWD current directory
$SHELL current shell
$USER current username
$HOSTNAME current hostname
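A minimal sketch showing a few of these special parameters in action (the argument values are made up):

```shell
# Set positional parameters, then inspect the special variables.
set -- foo bar baz
echo "number of args: $#"    # 3
echo "first arg: $1"         # foo
true
echo "last exit status: $?"  # 0
echo "current shell PID: $$"
```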
# foo=bar
echo "'$foo'"
#'bar'
# double quotes around single quotes make the inner single quotes literal while the variable still expands

var="some string"
echo ${#var}
# 11

var=string
echo "${var:0:1}"
#s
# or
echo ${var%%"${var#?}"}

var="some string"
echo ${var:2}
#me string

var="0050"
echo ${var[@]#0}
#050

# Replace the first match of 'a' in a variable:
${var/a/,}
# Replace all matches of 'a' in a variable:
${var//a/,}

#with grep
test="god the father"
grep ${test// /\\\|} file.txt
# turning the space into 'or' (\|) in grep

var=HelloWorld
echo ${var,,}
helloworld

cmd="bar=foo"
eval "$cmd"
echo "$bar" # foo

echo $(( 10 + 5 )) #15
x=1
echo $(( x++ )) #1 , notice that it is still 1, since it's post-increment
echo $(( x++ )) #2
echo $(( ++x )) #4 , notice that it is not 3 since it's pre-increment
echo $(( x-- )) #4
echo $(( x-- )) #3
echo $(( --x )) #1
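The post- vs pre-increment behaviour above can be verified in a fresh shell:

```shell
x=1
echo $(( x++ ))  # prints 1; x becomes 2 afterwards (post-increment)
echo $(( ++x ))  # x becomes 3 first, then prints 3 (pre-increment)
echo $x          # 3
```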
x=2
y=3
echo $(( x ** y )) #8

factor 50
# 50: 2 5 5

# Sum the numbers from 1 to 10
seq 10|paste -sd+|bc

# Sum the first column of a file
awk '{s+=$1} END {print s}' filename

# Sum the difference between column 3 and column 2
cat file| awk -F '\t' 'BEGIN {SUM=0}{SUM+=$3-$2}END{print SUM}'

expr 10 + 20 #30, note the spaces around the operator
expr 10 \* 20 #200
expr 30 \> 20 #1 (true)

# Number of decimal digits/ significant figures
echo "scale=2;2/3" | bc
#.66
# Exponent operator
echo "10^2" | bc
#100
# Using variables
echo "var=5;--var"| bc
#4

grep = grep -G # Basic Regular Expression (BRE)
fgrep = grep -F # fixed text, ignoring meta-characters
egrep = grep -E # Extended Regular Expression (ERE)
grep -P # Perl Compatible Regular Expressions (PCRE); note that the standalone pgrep command searches processes, not files
rgrep = grep -r # recursive

# Count the number of empty lines
grep -c "^$"

# Extract all the numbers
grep -o '[0-9]*'
#or
grep -oP '\d'

# Grep three digits
grep '[0-9]\{3\}'
# or
grep -E '[0-9]{3}'
# or
grep -P '\d{3}'

# Grep an IP address
grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
# or
grep -Po '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'

# Grep the whole word (e.g. 'target')
grep -w 'target'
#or using RE
grep '\btarget\b'

# return also 3 lines after match
grep -A 3 'bbo'
# return also 3 lines before match
grep -B 3 'bbo'
# return also 3 lines before and after match
grep -C 3 'bbo'

# Grep from the matching 'S' to the end of line
grep -o 'S.*'

# Extract text between w1 and w2
grep -o -P '(?<=w1).*(?=w2)'

# Print lines that do not match 'bbo'
grep -v bbo filename

# Remove comment lines
grep -v '^#' file.txt

# Grep with a variable
grep "$myvar" filename
#remember to quote the variable!

# Grep only the first match
grep -m 1 bbo filename

# Count matching lines
grep -c bbo filename

# Count the total number of matches
grep -o bbo filename |wc -l

# Case-insensitive grep
grep -i "bbo" filename

# Color the match
grep --color bbo filename

# Grep recursively under a directory
grep -R bbo /path/to/directory
# or
grep -r bbo /path/to/directory

# Grep recursively without outputting the filenames
grep -rh bbo /path/to/directory

# Grep recursively and print only the filenames
grep -rl bbo /path/to/directory

# Grep A or B or C or D
grep 'A\|B\|C\|D'

# Grep A followed by B on the same line
grep 'A.*B'

# Grep A, any single character, then B
grep 'A.B'

# Grep 'color' or 'colour' (use -E so '?' is a quantifier)
grep -E 'colou?r'

# Grep lines of fileB that match the patterns listed in fileA
grep -f fileA fileB

# Grep a tab character
grep $'\t'

# Test whether a string contains a substring
echo "$long_str"|grep -q "$short_str"
if [ $? -eq 0 ]; then echo 'found'; fi
#grep -q will output 0 if match found
#remember to add space between []!

# Grep everything inside parentheses
grep -oP '\(\K[^\)]+'

# Grep 10 word characters, '-R', then one word character
grep -o -w "\w\{10\}\-R\w\{1\}"
# \w word character [0-9a-zA-Z_] \W not word character

# Skip directories while grepping
grep -d skip 'bbo' /path/to/files/*

# Remove the first line of a file
sed 1d filename

# Remove the first 100 lines
sed 1,100d filename

# Remove lines containing 'bbo'
sed "/bbo/d" filename
- case insensitive:
sed "/bbo/Id" filename

# Delete lines whose sixth character is not '2'
sed -E '/^.{5}[^2]/d'
#aaaa2aaa (you can stay)
#aaaa1aaa (delete!)

# Delete lines containing 'bbo' in place
sed -i "/bbo/d" filename

# e.g. add >$i to the first line (to make a bioinformatics FASTA file)
sed "1i >$i"
# notice the double quotes! in other examples, you can use a single quote, but here, no way!
# '1i' means insert to first line

# Use backslash for end-of-line $ pattern, and double quotes for expressing the variable
sed -e "\$s/\$/\n+--$3-----+/"sed '/^\s*$/d'
# or
sed '/^$/d'

# Delete the last line
sed '$d'

# Delete the last character of the last line, in place
sed -i '$ s/.$//' filename

# Insert '[' at the beginning of the first line, in place
sed -i '1s/^/[/' file

# Insert text at line 1 and line 3 (note the quoting)
sed -e '1isomething' -e '3isomething'

# Append ']' to the last line
sed '$s/$/]/' filename

# Append a newline to the last line
sed '$a\'

# Add 'bbo' to the beginning of every line
sed -e 's/^/bbo/' file

# Add '}]' to the end of every line
sed -e 's/$/\}\]/' filename

# Insert a newline after every four characters
sed 's/.\{4\}/&\n/g'

# Append ',' to the last line of every .json file
sed -s '$a,' *.json > all.json

# Replace A with B
sed 's/A/B/g' filename

# Replace with a path (escape the slashes)
sed "s/aaa=.*/aaa=\/my\/new\/path/g"

# Print lines starting with '@S'
sed -n '/^@S/p'

# Delete lines containing 'bbo'
sed '/bbo/d' filename

# Print lines 500 to 5000
sed -n 500,5000p filename

# Print every third line starting from line 0
sed -n '0~3p' filename
# catch 0: start; 3: step

# Print odd-numbered lines
sed -n '1~2p'

# Print the first line, then every third line
sed -n '1p;0~3p'

# Remove leading whitespace and tabs
sed -e 's/^[ \t]*//'
# Notice a whitespace before '\t'!!

# Remove only leading whitespace
sed 's/ *//'
# notice a whitespace before '*'!!

# Remove a trailing comma
sed 's/,$//g'

# Add a value as a last column
sed "s/$/\t$i/"
# $i is the variable you want to add
# To add the filename to every last column of the file
for i in $(ls);do sed -i "s/$/\t$i/" $i;done

# Add the file extension as a last column
for i in T000086_1.02.n T000086_1.02.p;do sed "s/$/\t${i/*./}/" $i;done >T000086_1.02.np

# Remove all newlines
sed ':a;N;$!ba;s/\n//g'

# Print line 123
sed -n -e '123p'

# Print lines 10 to 33
sed -n '10,33p' <filename

# Escape all forward slashes
sed 's=/=\\/=g'

# Delete text between 'A-' and '-e'
sed 's/A-.*-e//g' filename

# Delete the last character of the last line
sed '$ s/.$//'

# Insert '#' after the first three characters of every line
sed -r -e 's/^.{3}/&#/' file

# Set tab as the awk field separator
awk -F $'\t'

# Set tab as the awk output field separator
awk -v OFS='\t'

a=bbo;b=obb;
awk -v a="$a" -v b="$b" '$1==a && $10==b' filename
# note the single quotes and '==' for comparison

# Print the line number and the length of each line
awk '{print NR,length($0);}' filename

# Print the number of fields
awk '{print NF}'

# Swap the first two columns
awk '{print $2, $1}'

# Print lines whose first field contains a comma
awk '$1~/,/ {print}'

# Split a comma-separated column into multiple lines
awk '{split($2, a,",");for (i in a) print $1"\t"a[i]}' filename

# Print until the 7th match of 'bbo'
awk -v N=7 '{print}/bbo/&& --N<=0 {exit}'

# Print the filename and the last line of every file
ls|xargs -n1 -I file awk '{s=$0};END{print FILENAME,s}' file

# Add 'chr' in front of column 3
awk 'BEGIN{OFS="\t"}$3="chr"$3'

# Print lines not containing 'bbo'
awk '!/bbo/' file

# Remove the last column
awk 'NF{NF-=1};1' file

# For example there are two files:
# fileA:
# a
# b
# c
# fileB:
# d
# e
awk '{print FILENAME, NR,FNR,$0}' fileA fileB
# fileA 1 1 a
# fileA 2 2 b
# fileA 3 3 c
# fileB 4 1 d
# fileB 5 2 e

# For example there are two files:
# fileA:
# 1 0
# 2 1
# 3 1
# 4 0
# fileB:
# 1 0
# 2 1
# 3 0
# 4 1
awk -v OFS='\t' 'NR==FNR{a[$1]=$2;next} NF {print $1,((a[$1]==$2)? $2:"0")}' fileA fileB
# 1 0
# 2 1
# 3 0
# 4 0

# Round all decimal numbers in a file to two decimal places
awk '{while (match($0, /[0-9]+\.[0-9]+/)){
  printf "%s%.2f", substr($0,0,RSTART-1), substr($0,RSTART,RLENGTH)
  $0=substr($0, RSTART+RLENGTH)
  }
print
}'

# Add a line number to every line
awk '{printf("%s\t%s\n",NR,$0)}'

# For example, separate the following content:
# David cat,dog
# into
# David cat
# David dog
awk '{split($2,a,",");for(i in a)print $1"\t"a[i]}' file
# Detail here: http://stackoverflow.com/questions/33408762/bash-turning-single-comma-separated-column-into-multi-line-string

# Average of the first column
awk '{s+=$1}END{print s/NR}'

# Print lines whose first field starts with 'Linux'
awk '$1 ~ /^Linux/'

# Sort the fields of every line
awk ' {split( $0, a, "\t" ); asort( a ); for( i = 1; i <= length(a); i++ ) printf( "%s\t", a[i] ); printf( "\n" ); }'

# Subtract the previous row's column 5 from column 4
awk '{$6 = $4 - prev5; prev5 = $5; print;}'

# Set tab as the delimiter for xargs
xargs -d\t

# Prompt before running head on each file
ls|xargs -L1 -p head

# Group arguments, three per line
echo 1 2 3 4 5 6| xargs -n 3
# 1 2 3
# 4 5 6
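The -n grouping can be confirmed with a different group size:

```shell
echo 1 2 3 4 5 6 | xargs -n 2
# 1 2
# 3 4
# 5 6
```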
echo a b c |xargs -p -n 3

# Print the command before executing it
xargs -t abcd
# /bin/echo abcd
# abcd
find . -name "*.html"|xargs rm
# when using a backtick
rm `find . -name "*.html"`

# Handle filenames containing special characters using -print0
find . -name "*.c" -print0|xargs -0 rm -rf

# Show the system limits on xargs
xargs --show-limits
# Output from my Ubuntu:
# Your environment variables take up 3653 bytes
# POSIX upper limit on argument length (this system): 2091451
# POSIX smallest allowable upper limit on argument length (all systems): 4096
# Maximum length of command we could actually use: 2087798
# Size of command buffer we are actually using: 131072
# Maximum parallelism (--max-procs must be no greater): 2147483647

# Move all .bak files to ~/old
find . -name "*.bak" -print0|xargs -0 -I {} mv {} ~/old
# or
find . -name "*.bak" -print0|xargs -0 -I file mv file ~/old

# Move the first 100 files to d1
ls |head -100|xargs -I {} mv {} d1

# Run five sleeps in parallel
time echo {1..5} |xargs -n 1 -P 5 sleep
# a lot faster than:
time echo {1..5} |xargs -n1 sleep

# Copy all .py files to another directory, keeping their attributes
find /dir/to/A -type f -name "*.py" -print0| xargs -0 -r -I file cp -v -p file --target-directory=/path/to/B
# v: verbose
# p: keep detail (e.g. owner)

# Delete lines starting with 'Pos' in every file
ls |xargs -n1 -I file sed -i '/^Pos/d' file

# Insert the filename as the first line of each file
ls |sed 's/.txt//g'|xargs -n1 -I file sed -i -e '1 i\>file\' file.txt

# Count the lines of every file
ls |xargs -n1 wc -l

# Flatten ls -l output onto one line
ls -l| xargs

# Count, per directory, the entries matching '74'
echo mso{1..8}|xargs -n1 bash -c 'echo -n "$1:"; ls -la "$1"| grep -w 74 |wc -l' --
# "--" signals the end of options and disables further option processing

# Count the lines of all files
ls|xargs wc -l

# Grep every pattern listed in grep_list
cat grep_list |xargs -I{} grep {} filename

# Replace an IP address in every file that contains it
grep -rl '192.168.1.111' /etc | xargs sed -i 's/192.168.1.111/192.168.2.111/g'

# List everything recursively
find .

# List only files
find . -type f

# List only directories
find . -type d

# Edit all .php files in place
find . -name '*.php' -exec sed -i 's/www/w/g' {} \;
# if there are no subdirectory
replace "www" "w" -- *
# a space before *

# Print only the filename, not the path
find mso*/ -name M* -printf "%f\n"

# Find files bigger than 4GB
find / -type f -size +4G

# Find and delete files smaller than 74 bytes
find . -name "*.mso" -size -74c -delete
# M for MB, etc

# Find empty files
find . -type f -empty
# to further delete all the empty files
find . -type f -empty -delete

# if and else loop for string matching
if [[ "$c" == "read" ]]; then outputdir="seq"; else outputdir="write" ; fi
# Test if myfile contains the string 'test':
if grep -q hello myfile; then …
# Test if mydir is a directory, change to it and do other stuff:
if cd mydir; then
echo 'some content' >myfile
else
echo >&2 "Fatal error. This script requires mydir."
fi
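A runnable version of the grep -q test above; the file path is made up:

```shell
# Create a scratch file, test for the string, then clean up.
printf 'hello world\n' > /tmp/grepq_demo.txt
if grep -q hello /tmp/grepq_demo.txt; then
  echo "found"
fi
rm -f /tmp/grepq_demo.txt
```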
# if variable is null
if [ -z "$myvariable" ]
# True if the length of "STRING" is zero.
# Test if file exist
if [ -e 'filename' ]
then
echo -e "file exists!"
fi
# Test if file exist but also including symbolic links:
if [ -e myfile ] || [ -L myfile ]
then
echo -e "file exists!"
fi
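The file-existence test above, made self-contained (the temp path is made up):

```shell
f=/tmp/exists_demo.txt
touch "$f"
if [ -e "$f" ] || [ -L "$f" ]; then
  echo "file exists!"
fi
rm -f "$f"
```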
# Test if the value of x is greater or equal than 5
if [ "$x" -ge 5 ]; then …
# Test if the value of x is greater or equal than 5, in bash/ksh/zsh:
if ((x >= 5)); then …
# Use (( )) for arithmetic operation
if ((j==u+2))
# Use [[ ]] for comparison
if [[ $age -gt 21 ]]

# Echo the file name under the current directory
for i in $(ls); do echo file $i;done
#or
for i in *; do echo file $i; done
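The glob loop above, run against a scratch directory (the paths are made up):

```shell
# Make a clean directory with two known files, then loop over the glob.
rm -rf /tmp/loop_demo && mkdir /tmp/loop_demo && cd /tmp/loop_demo
touch a.txt b.txt
for i in *; do echo "file $i"; done
# file a.txt
# file b.txt
```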
# Make directories listed in a file (e.g. myfile)
for dir in $(<myfile); do mkdir $dir; done
# Press any key to continue each loop
for i in $(cat tpc_stats_0925.log |grep failed|grep -o '\query\w\{1,2\}');do cat ${i}.log; read -rsp $'Press any key to continue...\n' -n1 key;done
# Print a file line by line when a key is pressed,
oifs="$IFS"; IFS=$'\n'; for line in $(cat myfile); do ...; done
while read -r line; do ...; done <myfile
#If only one word a line, simply
for line in $(cat myfile); do echo $line; read -n1; done
#Loop through an array
for i in "${arrayName[@]}"; do echo $i;done
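A concrete run of the array loop above (the array contents are made up):

```shell
arrayName=(alpha beta gamma)
for i in "${arrayName[@]}"; do echo "$i"; done
# alpha
# beta
# gamma
```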
# Column subtraction of a file (e.g. a 3 columns file)
while read a b c; do echo $(($c-$b));done < <(head filename)
#there is a space between the two '<'s
# Sum up column subtraction
i=0; while read a b c; do ((i+=$c-$b)); echo $i; done < <(head filename)
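The column-subtraction loop above, fed with made-up sample data:

```shell
# Columns: name, start, end; sum up (end - start) over all rows.
printf 'a 1 4\nb 2 9\n' > /tmp/cols_demo.txt
i=0
while read -r a b c; do i=$((i + c - b)); done < /tmp/cols_demo.txt
echo "$i"   # (4-1)+(9-2) = 10
rm -f /tmp/cols_demo.txt
```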
# Keep checking a running process (e.g. perl) and start another new process (e.g. python) immediately after it. (BETTER use the wait command! Ctrl+F 'wait')
while [[ $(pidof perl) ]];do echo f;sleep 10;done && python timetorunpython.py

read type;
case $type in
'0')
echo 'how'
;;
'1')
echo 'are'
;;
'2')
echo 'you'
;;
esac

# Time a command
time echo hi

# Sleep for 10 seconds
sleep 10

# Print today's date
date +%F
# 2020-07-19
# or
date +'%d-%b-%Y-%H:%M:%S'
#10-Apr-2020-21:54:40

# Sleep for a random 1-5 seconds ($[ ] is deprecated; prefer $(( )))
sleep $(( ( RANDOM % 5 ) + 1 ))

# Log out automatically after 10 seconds of inactivity
TMOUT=10
#once you set this variable, the logout timer starts running!

#This will run the command 'sleep 10' for only 1 second.
timeout 1 sleep 10

# Schedule a job
at now + 1min #time-units can be minutes, hours, days, or weeks
warning: commands will be executed using /bin/sh
at> echo hihigithub >~/itworks
at> <EOT> # press Ctrl + D to exit
job 1 at Wed Apr 18 11:16:00 2018

# Read a remote markdown file as a man page
curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -f markdown -t man | man -l -
# or w3m (a text based web browser and pager)
curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc | w3m -T text/html
# or using emacs (in the emacs text editor)
emacs --eval '(org-mode)' --insert <(curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -t org)
# or using emacs (on terminal, exit using Ctrl + x then Ctrl + c)
emacs -nw --eval '(org-mode)' --insert <(curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -t org)

# Download all mp3 files linked from a page
wget -r -l1 -H -t1 -nd -N -np -A mp3 -e robots=off http://example.com
# -r: recursive and download all links on page
# -l1: only one level link
# -H: span host, visit other hosts
# -t1: numbers of retries
# -nd: don't make new directories, download to here
# -N: turn on timestamp
# -np: no parent
# -A: type (separate by ,)
# -e robots=off: ignore the robots.txt file which stops wget from crashing the site, sorry example.com

Upload a file to web and download (https://transfer.sh/)
# Upload a file (e.g. filename.txt):
curl --upload-file ./filename.txt https://transfer.sh/filename.txt
# the above command will return a URL, e.g: https://transfer.sh/tG8rM/filename.txt
# Next you can download it by:
curl https://transfer.sh/tG8rM/filename.txt -o filename.txt

# Download test data only if the file is absent
data=file.txt
url=http://www.example.com/$data
if [ ! -s $data ];then
echo "downloading test data..."
wget $url
fi

# Download to a specific filename
wget -O filename "http://example.com"

# Download into a directory
wget -P /path/to/directory "http://example.com"

# Follow redirects with curl
curl -L google.com

# Generate random passwords
sudo apt install pwgen
pwgen 13 5
#sahcahS9dah4a xieXaiJaey7xa UuMeo0ma7eic9 Ahpah9see3zai acerae7Huigh7

# Pick 100 random lines from a file
shuf -n 100 filename

# Shuffle a list
for i in a b c d e; do echo $i; done| shuf

Echo a series of random numbers between a range (e.g. shuffle numbers from 0-100, then pick 15 of them randomly)
shuf -i 0-100 -n 15

# A random number from 0 to 32767
echo $RANDOM

# A random number from 0 to 9
echo $((RANDOM % 10))

# A random number from 1 to 10
echo $(((RANDOM %10)+1))

X11 GUI applications! Here are some GUI tools for you if you get bored by the text-only environment.
ssh -X user_name@ip_address
# or setting through xhost
# --> Install the following for Centos:
# xorg-x11-xauth
# xorg-x11-fonts-*
# xorg-x11-utils

# Some X11 toys
xclock
xeyes
xcowsay

# View pictures over ssh
1. ssh -X user_name@ip_address
2. apt-get install eog
3. eog picture.png

# Play videos over ssh
1. ssh -X user_name@ip_address
2. sudo apt install mpv
3. mpv myvideo.mp4

# Edit text files over ssh
1. ssh -X user_name@ip_address
2. apt-get install gedit
3. gedit filename.txt

# View PDFs over ssh
1. ssh -X user_name@ip_address
2. apt-get install evince
3. evince filename.pdf

# Run Chrome over ssh
1. ssh -X user_name@ip_address
2. apt-get install libxss1 libappindicator1 libindicator7
3. wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
4. sudo apt-get install -f
5. dpkg -i google-chrome*.deb
6. google-chrome

# List yum history (e.g install, update)
sudo yum history
# Example output:
# Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
# ID | Login user | Date and time | Action(s) | Altered
# -------------------------------------------------------------------------------
# 11 | ... <myuser> | 2020-04-10 10:57 | Install | 1 P<
# 10 | ... <myuser> | 2020-03-27 05:21 | Install | 1 >P
# 9 | ... <myuser> | 2020-03-05 11:57 | I, U | 56 *<
# ...
# Show more details of a yum history (e.g. history #11)
sudo yum history info 11
# Undo a yum history (e.g. history #11, this will uninstall some packages)
sudo yum history undo 11

# To audit a directory recursively for changes (e.g. myproject)
auditctl -w /path/to/myproject/ -p wa
# If you delete a file name "VIPfile", the deletion is recorded in /var/log/audit/audit.log
sudo grep VIPfile /var/log/audit/audit.log
#type=PATH msg=audit(1581417313.678:113): item=1 name="VIPfile" inode=300115 dev=ca:01 mode=0100664 ouid=1000 ogid=1000 rdev=00:00 nametype=DELETE cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0

# Check SELinux status
sestatus
# SELinux status: enabled
# SELinuxfs mount: /sys/fs/selinux
# SELinux root directory: /etc/selinux
# Loaded policy name: targeted
# Current mode: enforcing
# Mode from config file: enforcing
# Policy MLS status: enabled
# Policy deny_unknown status: allowed
# Max kernel policy version: 31

# Generate a public key from a private key
ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub

# Copy your public key to a remote host for password-less login
ssh-copy-id <user_name>@<server_IP>
# then you need to enter the password
# and next time you won't need to enter password when ssh to that user

Copy default public key to remote user using the required private key (e.g. use your mykey.pem key to copy your id_rsa.pub to the remote user)
# before you need to use mykey.pem to ssh to remote user.
ssh-copy-id -i ~/.ssh/id_rsa.pub -o "IdentityFile ~/Downloads/mykey.pem" <user_name>@<server_IP>
# now you don't need to use key to ssh to that user.

# To bring your key with you when ssh to serverA, then ssh to serverB from serverA using the key.
ssh-agent
ssh-add /path/to/mykey.pem
ssh -A <username>@<IP_of_serverA>
# Next you can ssh to serverB
ssh <username>@<IP_of_serverB>

# add the following to ~/.ssh/config
Host myserver
User myuser
IdentityFile ~/path/to/mykey.pem
# Next, you could run "ssh myserver" instead of "ssh -i ~/path/to/mykey.pem myuser@myserver"

# Follow the logs of a service
journalctl -u <service_name> -f

# A zombie is already dead, so you cannot kill it. You can eliminate the zombie by killing its parent.
# First, find PID of the zombie
ps aux| grep 'Z'
# Next find the PID of zombie's parent
pstree -p -s <zombie_PID>
# Then you can kill its parent and you will notice the zombie is gone.
sudo kill -9 <parent_PID>

# Show memory usage 10 times, at 1 second intervals, in human-readable form
free -c 10 -mhs 1
# print 10 times, at 1 second interval
iostat -x -t 1

# Monitor traffic on an interface
iftop -i enp175s0f0

# Show load averages and uptime
uptime

# Require root to run a script
if [ "$EUID" -ne 0 ]; then
echo "Please run this as root"
exit 1
fi

# Change the login shell of user 'bonnie' to /bin/sh
chsh -s /bin/sh bonnie
# /etc/shells: valid login shells

# Change the root directory
chroot /home/newroot /bin/bash
# To exit chroot
exit

# Show file status
stat filename.txt

# List processes
ps aux

# Show the process tree
pstree

# Maximum PID value
cat /proc/sys/kernel/pid_max

# Show the kernel ring buffer
dmesg

# Show IP addresses
ip add show
# or
ifconfig

# Show the current runlevel
runlevel
# or
who -r

# Change the runlevel to 5
init 5
#or
telinit 5

# List services and their runlevels
chkconfig --list
# update-rc.d equivalent to chkconfig in ubuntu

# Show the Linux distribution
cat /etc/*-release

# Describe the filesystem hierarchy
man hier

# e.g. check the status of cron service
systemctl status cron.service
# e.g. stop cron service
systemctl stop cron.service

# List jobs with their PIDs
jobs -l

# nice value is adjustable from -20 (most favorable) to +19
# the nicer the application, the lower the priority
# Default niceness is 0; note that 'nice -10' runs with niceness 10, use 'nice -n -10' for -10
nice -10 ./test.sh

# Add a directory to PATH
export PATH=$PATH:~/path/you/want

# Make a file executable
chmod +x filename
# you can now ./filename to execute it

# Show system information
uname -a
# Check system hardware-platform (x86-64)
uname -i

# Browse the web from the terminal
links www.google.com

# Create a user, then set their password
useradd username
passwd username

# Customize the shell prompt
1. vi ~/.bash_profile
2. export PS1='\u@\h:\w\$'
# $PS1 is a variable that defines the makeup and style of the command prompt
# You could use emojis and add timestamp to every prompt using the following value:
# export PS1="\t@🦁:\w\$ "
3. source ~/.bash_profile

# Create an alias
1. vi ~/.bash_profile
2. alias pd="pwd" //no more need to type that 'w'!
3. source ~/.bash_profile

# List all aliases
alias -p

# Remove an alias
unalias ls

# print all shell options
shopt
# to unset (or stop) alias
shopt -u expand_aliases
# to set (or start) alias
shopt -s expand_aliases

# Show PATH
echo $PATH
# list of directories separated by a colon

# List environment variables
env

# Unset a variable
unset MYVAR

# List block devices
lsblk

# Inform the OS of partition table changes
partprobe

# Create a symbolic link
ln -s /path/to/program /home/usr/bin
# must be the whole path to the program

# Hex dump a file
hexdump -C filename.class

# Remote shell
rsh node_name

# List listening ports
netstat -tulpn

# Resolve a symbolic link
readlink filename

# Find out what kind of command 'python' is
type python
# python is /usr/bin/python
# There are 5 different types, check using the 'type -f' flag
# 1. alias (shell alias)
# 2. function (shell function, type will also print the function body)
# 3. builtin (shell builtin)
# 4. file (disk file)
# 5. keyword (shell reserved word)
# You can also use `which`
which python
# /usr/bin/python

# List declared shell functions
declare -F

# Size of the current directory
du -hs .
# or
du -sb

# Copy a directory, preserving attributes
cp -rp /path/to/directory

# Save the current directory on the directory stack
pushd .
# then pop
popd
#or use dirs to display the list of currently remembered directories.
dirs -l

# Show disk usage
df -h
# or
du -h
#or
du -sk /var/log/* |sort -rn |head -10

# Show filesystem types
df -TH

# Show the current runlevel
runlevel

# Change the runlevel
init 3
#or
telinit 3

# Change the default runlevel (upstart)
1. edit /etc/init/rc-sysinit.conf
2. env DEFAULT_RUNLEVEL=2

# Switch user
su
su somebody

# Report disk quotas
repquota -auvs

# Query system databases
getent database_name
# (e.g. the 'passwd' database)
getent passwd
# list all user account (all local and LDAP)
# (e.g. fetch list of group accounts)
getent group
# store in database 'group'

# Change file ownership
chown user_name filename
chown -R user_name /path/to/directory/
# chown user:group filename

# e.g. Mount /dev/sdb to /home/test
mount /dev/sdb /home/test
# e.g. Unmount /home/test
umount /home/test

# List mounted filesystems
mount
# or
df

# List user accounts
cat /etc/passwd

# List usernames only
getent passwd| awk -F: '{print $1}'

# List users
compgen -u

# List groups
compgen -g

# Show the groups of a user
groups username

# Show user and group IDs
id username
# variable for UID
echo $UID

# Check whether you are root
if [ $(id -u) -ne 0 ];then
echo "You are not root!"
exit;
fi
# 'id -u' outputs 0 if you are root

# Show CPU information
more /proc/cpuinfo
# or
lscpu

# Set a disk quota for a user
setquota username 120586240 125829120 0 0 /home

# Show a user's quota
quota -v username

# List cached shared libraries
ldconfig -p

# List the shared libraries of a binary
ldd /bin/ls

# Show the last logins of all users
lastlog

# Show reboot history
last reboot

# Set system-wide environment variables
joe /etc/environment
# edit this file

# Show the max user processes limit
ulimit -u

# Number of processing units
nproc --all

# Show per-CPU usage
1. top
2. press '1'

# List jobs with their PIDs
jobs -l

# List all services
service --status-all

# Schedule a reboot with a warning
shutdown -r +5 "Server will restart in 5 minutes. Please save your work."

# Cancel a scheduled shutdown
shutdown -c

# Broadcast a message to all users
wall -n hihi

# Kill all processes of a user
pkill -U user_name

# Kill processes by name
kill -9 $(ps aux | grep 'program_name' | awk '{print $2}')

# You might have to install the following:
apt-get install libglib2.0-bin;
# or
yum install dconf dconf-editor;
yum install dbus dbus-x11;
# Check list
gsettings list-recursively
# Change some settings
gsettings set org.gnome.gedit.preferences.editor highlight-current-line true
gsettings set org.gnome.gedit.preferences.editor scheme 'cobalt'
gsettings set org.gnome.gedit.preferences.editor use-default-font false
gsettings set org.gnome.gedit.preferences.editor editor-font 'Cantarell Regular 12'
Add user to a group (e.g. add user 'nice' to the group 'docker', so that they can run docker without sudo)
sudo gpasswd -a nice docker

# Install a Python package for the current user only
1. pip install --user package_name
2. You might need to export ~/.local/bin/ to PATH: export PATH=$PATH:~/.local/bin/

# Remove old kernels
1. uname -a #check current kernel, which should NOT be removed
2. sudo apt-get purge linux-image-X.X.X-X-generic #replace old version

# Change the hostname
sudo hostname your-new-name
# if not working, do also:
hostnamectl set-hostname your-new-hostname
# then check with:
hostnamectl
# Or check /etc/hostname
# If still not working..., edit:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-ensxxx
#add HOSTNAME="your-new-hostname"

# List installed packages
apt list --installed
# or on Red Hat:
yum list installed

# List upgradeable packages
apt list --upgradeable
# or
sudo yum check-update

# Update everything except php packages
sudo yum update --exclude=php*

# List open files on a mount point
lsof /mnt/dir

# Restart pulseaudio
killall pulseaudio
# then press Alt-F2 and type in pulseaudio

# List SCSI devices
lsscsi

# Some related posts:
http://onceuponmine.blogspot.tw/2017/08/set-up-your-own-dns-server.html
http://onceuponmine.blogspot.tw/2017/07/create-your-first-simple-daemon.html
http://onceuponmine.blogspot.tw/2017/10/setting-up-msmtprc-and-use-your-gmail.html
Using telnet to test open ports, test if you can connect to a port (e.g. 53) of a server (e.g. 192.168.2.106)
telnet 192.168.2.106 53

# Set the MTU of an interface
ifconfig eth0 mtu 9000

# Find the PID of a process
pidof python
# or
ps aux|grep python

# Inspect a process by PID
ps -p <PID>
#or
cat /proc/<PID>/status
cat /proc/<PID>/stack
cat /proc/<PID>/stat

# Start ntp:
ntpd
# Check ntp:
ntpq -p

# Clean up disk space on Ubuntu
sudo apt-get autoremove
sudo apt-get clean
sudo rm -rf ~/.cache/thumbnails/*
# Remove old kernel:
sudo dpkg --list 'linux-image*'
sudo apt-get remove linux-image-OLDER_VERSION

# Extend a logical volume and its filesystem
pvscan
lvextend -L +130G /dev/rhel/root -r
# Adding -r will grow the filesystem after resizing the volume.

# Write an ISO image to a USB device
sudo dd if=~/path/to/isofile.iso of=/dev/sdc1 oflag=direct bs=1048576

# Completely remove a package
sudo dpkg -l | grep <package_name>
sudo dpkg --purge <package_name>

# SSH port forwarding
ssh -f -L 9000:targetservername:8088 root@192.168.14.72 -N
#-f: run in background; -L: Listen; -N: do nothing
#the 9000 of your computer is now connected to the 8088 port of the targetservername through 192.168.14.72
#so that you can see the content of targetservername:8088 by entering localhost:9000 from your browser.

# Different ways to check whether a program is running:
#pidof
pidof sublime_text
#pgrep, you don't have to type the whole program name
pgrep sublim
#pgrep, echo 1 if the process is found, echo 0 if there is no such process
pgrep -q sublime_text && echo 1 || echo 0
#top, takes a longer time
top|grep sublime_text

# Some benchmarking tools:
aio-stress - AIO benchmark.
bandwidth - memory bandwidth benchmark.
bonnie++ - hard drive and file system performance benchmark.
dbench - generate I/O workloads to either a filesystem or to a networked CIFS or NFS server.
dnsperf - benchmark authoritative and recursing DNS servers.
filebench - model based file system workload generator.
fio - I/O benchmark.
fs_mark - synchronous/async file creation benchmark.
httperf - measure web server performance.
interbench - linux interactivity benchmark.
ioblazer - multi-platform storage stack micro-benchmark.
iozone - filesystem benchmark.
iperf3 - measure TCP/UDP/SCTP performance.
kcbench - kernel compile benchmark, compiles a kernel and measures the time it takes.
lmbench - Suite of simple, portable benchmarks.
netperf - measure network performance, test unidirectional throughput, and end-to-end latency.
netpipe - network protocol independent performance evaluator.
nfsometer - NFS performance framework.
nuttcp - measure network performance.
phoronix-test-suite - comprehensive automated testing and benchmarking platform.
seeker - portable disk seek benchmark.
siege - http load tester and benchmark.
sockperf - network benchmarking utility over socket API.
spew - measures I/O performance and/or generates I/O load.
stress - workload generator for POSIX systems.
sysbench - scriptable database and system performance benchmark.
tiobench - threaded IO benchmark.
unixbench - the original BYTE UNIX benchmark suite, provide a basic indicator of the performance of a Unix-like system.
wrk - HTTP benchmark.
# installation
# sysstat collects data every 10 minutes and generates its report daily; the crontab file (/etc/cron.d/sysstat) is responsible for collecting and generating reports.
yum install sysstat
systemctl start sysstat
systemctl enable sysstat
# show CPU utilization 5 times every 2 seconds.
sar 2 5
# show memory utilization 5 times every 2 seconds.
sar -r 2 5
# show paging statistics 5 times every 2 seconds.
sar -B 2 5
# To generate all network statistic:
sar -n ALL
# reading SAR log file using -f
sar -f /var/log/sa/sa31|tail

##### Reading from journal file
journalctl --file ./log/journal/a90c18f62af546ccba02fa3734f00a04/system.journal --since "2020-02-11 00:00:00"

# Show failed login attempts
lastb

# Show who is logged on and what they are doing
who
w

# List logged-in users
users

# Follow a file until the process with the given PID dies
tail -f --pid=<PID> filename.txt
# replace <PID> with the process ID of the program.

# List enabled services
systemctl list-unit-files|grep enabled

# Dump hardware information to a file
lshw -json >report.json
# Other options are: [ -html ] [ -short ] [ -xml ] [ -json ] [ -businfo ] [ -sanitize ] ,etc

# Show memory hardware details
sudo dmidecode -t memory

# Show processor details
dmidecode -t 4
# Type Information
# 0 BIOS
# 1 System
# 2 Base Board
# 3 Chassis
# 4 Processor
# 5 Memory Controller
# 6 Memory Module
# 7 Cache
# 8 Port Connector
# 9 System Slots
# 11 OEM Strings
# 13 BIOS Language
# 15 System Event Log
# 16 Physical Memory Array
# 17 Memory Device
# 18 32-bit Memory Error
# 19 Memory Array Mapped Address
# 20 Memory Device Mapped Address
# 21 Built-in Pointing Device
# 22 Portable Battery
# 23 System Reset
# 24 Hardware Security
# 25 System Power Controls
# 26 Voltage Probe
# 27 Cooling Device
# 28 Temperature Probe
# 29 Electrical Current Probe
# 30 Out-of-band Remote Access
# 31 Boot Integrity Services
# 32 System Boot
# 34 Management Device
# 35 Management Device Component
# 36 Management Device Threshold Data
# 37 Memory Channel
# 38 IPMI Device
# 39 Power Supply

# Count SEAGATE disks
lsscsi|grep SEAGATE|wc -l
# or
sg_map -i -x|grep SEAGATE|wc -l

# Show filesystem information of a device
lsblk -f /dev/sdb
# or
sudo blkid /dev/sdb

# Generate a UUID
uuidgen

# Check whether disks are SSDs or spinning drives
lsblk -io KNAME,TYPE,MODEL,VENDOR,SIZE,ROTA
#where ROTA means rotational device / spinning hard disks (1 if true, 0 if false)

# List PCI devices
lspci
# List information about NIC
lspci | egrep -i --color 'network|ethernet'

# List USB devices
lsusb

# Show the status of modules in the Linux Kernel
lsmod
# Add and remove modules from the Linux Kernel
modprobe
# or
# Remove a module
rmmod
# Insert a module
insmod# Remotely finding out power status of the server
ipmitool -U <bmc_username> -P <bmc_password> -I lanplus -H <bmc_ip_address> power status
# Remotely switching on server
ipmitool -U <bmc_username> -P <bmc_password> -I lanplus -H <bmc_ip_address> power on
# Turn on panel identify light (default 15s)
ipmitool chassis identify 255
# Find out server sensor temperature
ipmitool sensors |grep -i Temp
# Reset BMC
ipmitool bmc reset cold
# Print BMC network settings
ipmitool lan print 1
# Setting BMC network
ipmitool -I bmc lan set 1 ipaddr 192.168.0.55
ipmitool -I bmc lan set 1 netmask 255.255.255.0
ipmitool -I bmc lan set 1 defgw ipaddr 192.168.0.1

# Look up the IP address of a domain
dig +short www.example.com
# or
host www.example.com

# Look up the TXT record of a domain
dig -t txt www.example.com
# or
host -t txt www.example.com

Send a ping with the TTL limited to 10 (TTL: Time-To-Live, which is the maximum number of hops that a packet can travel across the Internet before it gets discarded.)
ping 8.8.8.8 -t 10

# Trace the route to a host
traceroute google.com

# Test whether a port is open, with a 5 second timeout
nc -vw5 google.com 80
# Connection to google.com 80 port [tcp/http] succeeded!
nc -vw5 google.com 22
# nc: connect to google.com port 22 (tcp) timed out: Operation now in progress
# nc: connect to google.com port 22 (tcp) failed: Network is unreachable

# From server A:
$ sudo nc -l 80
# then you can connect to the 80 port from another server (e.g. server B):
# e.g. telnet <server A IP address> 80
# then type something in server B
# and you will see the result in server A!

#notice that some companies might not like you using nmap
nmap -sT -O localhost
# check port 0-65535
nmap -p0-65535 localhost

#skips checking if the host is alive which may sometimes cause a false positive and stop the scan.
$ nmap google.com -Pn
# Example output:
# Starting Nmap 7.01 ( https://nmap.org ) at 2020-07-18 22:59 CST
# Nmap scan report for google.com (172.217.24.14)
# Host is up (0.013s latency).
# Other addresses for google.com (not scanned): 2404:6800:4008:802::200e
# rDNS record for 172.217.24.14: tsa01s07-in-f14.1e100.net
# Not shown: 998 filtered ports
# PORT STATE SERVICE
# 80/tcp open http
# 443/tcp open https
#
# Nmap done: 1 IP address (1 host up) scanned in 3.99 seconds$ nmap -A -T4 scanme.nmap.org
# -A to enable OS and version detection, script scanning, and traceroute; -T4 for faster executionwhois google.comopenssl s_client -showcerts -connect www.example.com:443ip aip rDisplay ARP cache (ARP cache displays the MAC addresses of device in the same network that you have connected to)
ip nip address add 192.168.140.3/24 dev eno16777736sudo vi /etc/sysconfig/network-scripts/ifcfg-enoxxx
# then edit the fields: BOOTPROTO, DEVICE, IPADDR, NETMASK, GATEWAY, DNS1, etc.
sudo nmcli c reload
sudo systemctl restart network.service

Show or set the hostname
hostnamectl
hostnamectl set-hostname "mynode"

Fetch only the HTTP response headers
curl -I http://example.com/
# HTTP/1.1 200 OK
# Server: nginx
# Date: Thu, 02 Jan 2020 07:01:07 GMT
# Content-Type: text/html
# Content-Length: 1119
# Connection: keep-alive
# Vary: Accept-Encoding
# Last-Modified: Mon, 09 Sep 2019 10:37:49 GMT
# ETag: "xxxxxx"
# Accept-Ranges: bytes
# Vary: Accept-Encoding

Print only the HTTP status code
curl -s -o /dev/null -w "%{http_code}" https://www.google.com

Print the redirect target of a short URL
curl -s -o /dev/null -w "%{redirect_url}" https://bit.ly/34EFwWC

Measure network throughput with iperf
# server side:
$ sudo iperf -s -p 80
# client side:
iperf -c <server IP address> --parallel 2 -i 1 -t 2 -p 80

Block incoming connections on port 80 with iptables
sudo iptables -A INPUT -p tcp --dport 80 -j DROP
# only block connection from an IP address
sudo iptables -A INPUT -s <IP> -p tcp --dport 80 -j DROP

Look up words that begin with a string
# If file is not specified, the file /usr/share/dict/words is used.
look phy|head -n 10
# Phil
# Philadelphia
# Philadelphia's
# Philby
# Philby's
# Philip
# Philippe
# Philippe's
# Philippians
# Philippine

Print a string repeatedly (e.g. five times)
printf 'hello world\n%.0s' {1..5}

Assign command output to a variable
username=`echo -n "bashoneliner"`

Copy fileA to fileB, fileC and fileD at once
tee <fileA fileB fileC fileD >/dev/null

Remove non-printable characters
tr -dc '[:print:]' < filename

Delete all newlines
tr --delete '\n' <input.txt >output.txt

Replace newlines with spaces
tr '\n' ' ' <filename

Translate lowercase to uppercase
tr /a-z/ /A-Z/
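The tr recipes above can be chained in one pipeline; a quick sketch (the strings are made up):

```shell
# uppercase, then turn newlines into spaces:
printf 'hello\nworld\n' | tr 'a-z' 'A-Z' | tr '\n' ' '
# HELLO WORLD
# delete every digit:
echo 'a1b2c3' | tr -d '0-9'
# abc
```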
echo 'something' |tr a-z a
# aaaaaaaaa

Compare two files
diff fileA fileB
# a: added; d: deleted; c: changed
# or
sdiff fileA fileB
# side-by-side merge of file differences

Ignore Windows-style trailing carriage returns when comparing
diff fileA fileB --strip-trailing-cr

Number the lines of a file
nl fileA
#or
nl -nrz fileA
# add leading zeros
#or
nl -w1 -s ' '
# keep it simple: width 1, separated by a blank

Join two files field by field (by default, join matches on the first column of each file, and the default separator is a space)
# fileA and fileB should have the same ordering of lines.
join -t $'\t' fileA fileB
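A self-contained check of the tab-separated join (the two files are created on the spot, just for illustration):

```shell
printf '1\tapple\n2\tbanana\n' > fileA
printf '1\tred\n2\tyellow\n'  > fileB
join -t $'\t' fileA fileB
# each output line holds the key, the field from fileA, then the field from fileB
```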
# Join using specified field (e.g. column 3 of fileA and column 5 of fileB)
join -1 3 -2 5 fileA fileB

Paste files side by side
paste fileA fileB fileC
# tab-separated by default

Merge every two (or four) lines into one with paste
# e.g.
# AAAA
# BBBB
# CCCC
# DDDD
cat filename|paste - -
# AAAABBBB
# CCCCDDDD
cat filename|paste - - - -
# AAAABBBBCCCCDDDD

Convert a FASTQ file to FASTA
cat file.fastq | paste - - - - | sed 's/^@/>/g'| cut -f1-2 | tr '\t' '\n' >file.fa

Reverse a string
echo 12345| rev

Print a sequence of numbers
seq 10

Compute the average of a column of numbers
i=`wc -l filename|cut -d ' ' -f1`; cat filename| echo "scale=2;(`paste -sd+`)/"$i|bc

Generate combinations with brace expansion
echo {1,2}{1,2}
# 11 12 21 22

Generate all 5-mers of A, T, C, G
set={A,T,C,G}
group=5
for ((i=0; i<$group; i++));do
repetition=$set$repetition;done
bash -c "echo "$repetition""

Read a whole file into a variable
foo=$(<test1)

Print the length of a variable
echo ${#foo}

Print a tab
echo -e ' \t '

Split a large file
# Split by line (e.g. 1000 lines/smallfile)
split -d -l 1000 largefile.txt
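A quick round trip to convince yourself that split loses nothing (file names are arbitrary; GNU split's -d gives numeric suffixes x00, x01, ...):

```shell
seq 1 3000 > largefile.txt
split -d -l 1000 largefile.txt
wc -l x00 x01 x02            # 1000 lines each
cat x00 x01 x02 | cmp - largefile.txt && echo "round-trip OK"
# round-trip OK
```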
# Split by byte without breaking lines across files
split -C 10 largefile.txt

Split a big file into many fixed-size pieces
#1. Create a big file
dd if=/dev/zero of=bigfile bs=1 count=1000000
#2. Split the big file into 100000 10-byte files
split -b 10 -a 10 bigfile

Remove "ABC" from all .gz filenames
rename 's/ABC//' *.gz

Strip the extension from a filename
basename filename.gz .gz
zcat filename.gz> $(basename filename.gz .gz).unpacked

Add .txt to every filename
rename s/$/.txt/ *
# You can use rename -n s/$/.txt/ * to preview the result first; it will only print something like this:
# rename(a, a.txt)
# rename(b, b.txt)
# rename(c, c.txt)

Squeeze repeated tabs into one
tr -s "\t" < filename

Print without a trailing newline
echo -e 'text here \c'

Print the first 50 bytes of a file
head -c 50 file

Print the last field of each line (here, split on "/")
cat file|rev | cut -d/ -f1 | rev

Increment a variable
((var++))
# or
var=$((var+1))
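$(( )) arithmetic goes beyond incrementing; it also understands operators and base-prefixed numbers (the variable names here are illustrative):

```shell
var=5
((var++))          # post-increment: var is now 6
((var += 10))      # var is now 16
echo $((var * 2))  # 32
echo $((2#1010))   # 10  (1010 read as binary)
echo $((16#ff))    # 255 (ff read as hexadecimal)
```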
Print the last tab-separated column
cat filename|rev|cut -f1|rev

Create a file from the terminal
cat >myfile
let me add sth here
exit with Ctrl + c
^C

Empty a file (truncate to zero length)
>filename

Append to a file
echo 'hihi' >>filename

Parse JSON with jq
#install the useful jq package
#sudo apt-get install jq
#e.g. to get all the values of the 'url' key, simply pipe the JSON to the following jq command (you can use .[]. to select inner JSON, i.e. jq '.[].url')
cat file.json | jq '.url'

Convert a decimal number to binary
D2B=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
echo -e ${D2B[5]}
#00000101
echo -e ${D2B[255]}
#11111111

Wrap a string every 4 characters
echo "00110010101110001101" | fold -w4
# 0011
# 0010
# 1011
# 1000
# 1101

Stable sort by column 3
sort -k3,3 -s

Right-align the columns of a file
cat file.txt|rev|column -t|rev

Print to the screen and save to a file at the same time
echo 'hihihihi' | tee outputfile.txt
# use '-a' with tee to append to the file.

Show non-printing characters
cat -v filename

Convert tabs to spaces
expand filename

Convert spaces to tabs
unexpand filename

Dump a file in octal
od filename

Print a file in reverse line order
tac filename

Print each word in column 2 the number of times given in column 1
while read a b; do yes $b |head -n $a ;done <test.txt

Show image information (ImageMagick)
identify myimage.png
#myimage.png PNG 1049x747 1049x747+0+0 8-bit sRGB 1.006MB 0.000u 0:00.000

Bash auto-complete (e.g. show the options "now tomorrow never" when you press 'tab' after typing "dothis")
complete -W "now tomorrow never" dothis
# ~$ dothis
# never now tomorrow
# press 'tab' again to auto-complete after typing 'n' or 't'

Display a calendar
# print the current month, today will be highlighted.
cal
# October 2019
# Su Mo Tu We Th Fr Sa
# 1 2 3 4 5
# 6 7 8 9 10 11 12
# 13 14 15 16 17 18 19
# 20 21 22 23 24 25 26
# 27 28 29 30 31
# only display November
cal -m 11

Export the C locale (e.g. for consistent sorting and tool behavior)
export LC_ALL=C
# to revert:
unset LC_ALL

Base64-encode a string
echo test|base64
#dGVzdAo=

Print the parent directory of the current directory
dirname `pwd`

View a gzipped file without unpacking it
zmore filename
# or
zless filename

Redirect stdout and stderr of a command to a log file
some_commands &>log &
# or
some_commands 2>log &
# or
some_commands 2>&1| tee logfile
# or
some_commands |& tee logfile
# or
some_commands 2>&1 >>outfile
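A small sketch of the redirections above, using a throwaway function (the names are made up):

```shell
talk() { echo "to stdout"; echo "to stderr" >&2; }
# send the two streams to different files:
talk > out.log 2> err.log
cat out.log    # to stdout
cat err.log    # to stderr
# merge stderr into stdout, so the pipe sees both lines:
talk 2>&1 | wc -l    # 2
```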
#0: standard input; 1: standard output; 2: standard error

Run commands sequentially or in parallel
# run sequentially
(sleep 2; sleep 3) &
# run in parallel
sleep 2 & sleep 3 &

Keep a command running after you log out
# e.g. Run myscript.sh even when logged out.
nohup bash myscript.sh

Send an email with an attachment
echo 'heres the content'| mail -a /path/to/attach_file.txt -s 'mail.subject' me@gmail.com
# use the -a flag to set the sender (-a "From: some@mail.tld")

Convert an Excel file to CSV
xls2csv filename

Play a 1000 Hz tone
speaker-test -t sine -f 1000 -l1

Play a short beep (kill the tone after 0.1 s)
(speaker-test -t sine -f 1000) & pid=$!;sleep 0.1s;kill -9 $pid

Edit your command history
history -w
vi ~/.bash_history
history -r
#or
history -d [line_number]

List the 5 previous commands (similar to `history |tail -n 5`, but won't print the history command itself)
fc -l -5

Abandon the current command line without running it
Ctrl+U
# or
Ctrl+C
# or
Alt+Shift+#
# to make it into history

Add a command to history without executing it
# addmetodistory
# just add a "#" before~~

Run head again on the last argument of the previous command
head !$

Clear the terminal screen
clear
# or simply Ctrl+l

Back up files and directories with rsync
rsync -av filename filename.bak
rsync -av directory directory.bak
rsync -av --ignore-existing directory/ directory.bak
rsync -av --update directory directory.bak
rsync -av directory user@ip_address:/path/to/directory.bak
# skip files that are newer on the receiver (I prefer this one!)

Create nested directories in one command
mkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat}
# -p: make parent directory
# this will create project/doc/html/, project/doc/info, project/lib/ext, etc.

cd into tmp/ and untar only if the cd succeeds
cd tmp/ && tar xvf ~/a.tar

cd to tmp/a/b/c; create the directory first if it does not exist
cd tmp/a/b/c ||mkdir -p tmp/a/b/c

The same command, split across lines
cd tmp/a/b/c \
> || \
>mkdir -p tmp/a/b/c

Identify the type of a file
file /tmp/
# tmp/: directory

Remove the part of a script argument before the first "."
#!/bin/bash
file=${1#*.}
# remove the string before the first "."

Serve the current directory over HTTP
python -m SimpleHTTPServer
# or when using python3:
python3 -m http.server

Read user input into a variable
read input
echo $input

Declare an array
declare -a array=()
# or
declare array=()
# or associative array
declare -A array=()

Copy a directory to a remote host with scp
scp -r directoryname user@ip:/path/to/send

Fork bomb
# Don't try this at home!
# It is a function that calls itself twice every call until you run out of system resources.
# A '# ' is added in front for safety reasons; remove it only when you are seriously testing it.
# :(){ :|:& };:

Repeat the last argument of the previous command
!$

Print the exit status of the last command
echo $?

Extract a .tar.xz archive
unxz filename.tar.xz
# then
tar -xf filename.tar
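A round-trip sanity check for tar (directory and file names are arbitrary):

```shell
mkdir -p demo_dir
echo "hello" > demo_dir/file.txt
tar cf demo.tar demo_dir     # pack the directory
rm -r demo_dir
tar xf demo.tar              # unpack it again
cat demo_dir/file.txt
# hello
```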
Extract a .tar.bz2 archive
tar xvfj file.tar.bz2

Another way to extract a .tar.xz archive
unxz file.tar.xz
tar xopf file.tar

Extract an archive into a target directory
tar xvf filename.gz -C /path/to/directory

Zip a whole directory
# First cd to the directory, then run:
zip -r -D ../myzipfile .
# you will see myzipfile.zip in the parent directory (cd ..)

Repeatedly output a string with yes
# 'y':
yes
# or 'n':
yes n
# or 'anything':
yes anything
# For example:
yes | rm -r large_directory

Create a large file instantly
fallocate -l 10G 10Gigfile

Create a 200 MB file with dd
dd if=/dev/zero of=/dev/shm/200m bs=1024k count=200
# or
dd if=/dev/zero of=/dev/shm/200m bs=1M count=200
# Standard output:
# 200+0 records in
# 200+0 records out
# 209715200 bytes (210 MB) copied, 0.0955679 s, 2.2 GB/s

Watch the line count of a file every second
watch -n 1 wc -l filename

Print each command before it runs (set -x)
set -x; echo `expr 10 + 20 `

Print a random quote
fortune

Interactive process viewer
htop

Pause until a key is pressed
read -rsp $'Press any key to continue...\n' -n1 key

Run SQL-like queries against CSV/TSV files with q
# download:
# https://github.com/harelba/q
# example:
q -d "," "select c3,c4,c5 from /path/to/file.txt where c3='foo' and c5='boo'"

Screen commands
# Create session and attach:
screen
# Create detached session foo:
screen -S foo -d -m
# Detach from the session:
# inside screen, press ^a then ^d
# List sessions:
screen -ls
# Attach last session:
screen -r
# Attach to session foo:
screen -r foo
# Kill session foo:
screen -r foo -X quit
# Scroll:
Hit your screen prefix combination (C-a / control+A), then hit Escape.
Move up/down with the arrow keys (↑ and ↓).
# Redirect output of an already running process in Screen:
(C-a / control+A), then hit 'H'
# Store screen output for Screen:
Ctrl+A, Shift+H
# You will then find a screen.log file under the current directory.

Tmux commands
# Create session and attach:
tmux
# Attach to session foo:
tmux attach -t foo
# Detach from the session:
^bd
# List sessions:
tmux ls
# Attach last session:
tmux attach
# Kill session foo:
tmux kill-session -t foo
# Create detached session foo:
tmux new -s foo -d
# Send command to all panes in tmux:
Ctrl-B
:setw synchronize-panes
# Some tmux pane control commands:
Ctrl-B
# Panes (splits), Press Ctrl+B, then input the following symbol:
# % horizontal split
# " vertical split
# o swap panes
# q show pane numbers
# x kill pane
# space - toggle between layouts
# Distribute Vertically (rows):
select-layout even-vertical
# or
Ctrl+b, Alt+2
# Distribute horizontally (columns):
select-layout even-horizontal
# or
Ctrl+b, Alt+1
# Scroll
Ctrl-b then [ then you can use your normal navigation keys to scroll around.
Press q to quit scroll mode.

SSH with the password given on the command line (sshpass)
sshpass -p mypassword ssh root@10.102.14.88 "df -h"

Wait for background jobs to finish
wait %1
# or
wait $PID
wait ${!}
#wait ${!} waits for the last background process ($! is the PID of the last background process)

Convert a PDF to text
sudo apt-get install poppler-utils
pdftotext example.pdf example.txt

List only directories
ls -d */

List one entry per line
ls -1
# or list all, do not ignore entries starting with .
ls -1a

Record your terminal session to a file
script output.txt
# start using terminal
# to log out of the script session (and stop saving the contents), type exit.

Display a directory as a tree
tree
# go to the directory you want to list, and type tree (sudo apt-get install tree)
# output:
# home/
# └── project
# ├── 1
# ├── 2
# ├── 3
# ├── 4
# └── 5
#
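If tree is not installed, find gives a rough equivalent, one path per line (the directory layout below is made up):

```shell
mkdir -p home/project
touch home/project/1 home/project/2
find home | sort
# home
# home/project
# home/project/1
# home/project/2
```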
# set level directories deep (e.g. level 1)
tree -L 1
# home/
# └── project

Set up a Python virtual environment
# 1. install virtualenv.
sudo apt-get install virtualenv
# 2. Create a directory (name it .venv or whatever name you want) for your new shiny isolated environment.
virtualenv .venv
# 3. source the virtualenv's activate script
source .venv/bin/activate
# 4. you can check if you are now inside the sandbox.
type pip
# 5. Now you can install your pip packages; here requirements.txt is simply a text file listing all the packages you want (e.g. tornado==4.5.3).
pip install -r requirements.txt

More coming!!