kekse1/scripts

Some of my scripts which could be somewhat helpful. ... only for the `bash` shell now.

License
Every script here was written by myself; each arose out of necessity.. or because I found it interesting.

Note

All of these scripts are implemented for the bash shell only!

Index

  1. News
  2. Scripts
  3. Contact
  4. Copyright and License

News

  • [2025-03-18] Moved the only non-bash script from here to its own nproc.c repository

Moved to its own repository.

In its own repository.

You should put this script into your /etc/profile.d/ directory, so the lines() function gets sourced. Then just call it; the possible parameters are described at the top of this bash shell script file.

A simple script you can use with either a file path parameter or stdin (if defined at all), to perform one of these actions (see the usage sketch below):

  • display the line count of your input
  • extract a specific line
  • extract a range of lines
  • negative numbers count backwards from the end of the file (EOF)
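A minimal usage sketch, assuming the lines() function is sourced from /etc/profile.d/ (the exact parameter order is described at the top of the script; these calls are only illustrative):

```bash
lines file.txt           # print the line count of file.txt
lines file.txt 12        # extract line 12
lines file.txt 5 20      # extract lines 5 through 20
lines file.txt -1        # extract the last line (negative counts from EOF)
cat file.txt | lines 3   # stdin works as well
```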

The most important thing for me was switching between keyboard layouts easily, with a shortcut I've set up in XFCE (Settings -> Keyboard): calling this script with only the '-' argument switches between the configured layouts.

layout.sh

So either call it without arguments and it'll show you the currently used layout; call it with a concrete layout to switch to it directly; or call it with a single -, so it'll toggle between the configured layouts (defined at the top of the script, by default layouts=("us" "de")).
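Illustrative calls (the layout names are just the script's defaults):

```bash
layout.sh      # show the currently active layout
layout.sh de   # switch directly to the German layout
layout.sh -    # toggle between the configured layouts (us <-> de)
```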

A tool for Gentoo Linux, Debian and Termux. I'm using it to run all the steps needed to keep your packages up2date, in just one step!

Also, just copy it to /etc/profile.d/up2date.sh

Helper to quickly update git repositories.. really tiny.

Tip

Includes a function keep() to create .keep files in empty directories. Useful for git, since it won't track empty directories.
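A minimal sketch of what such a keep() helper could look like (an assumption for illustration, not the actual implementation from the script):

```bash
keep()
{
	# drop a .keep file into every empty directory below the current one,
	# so git has something to track
	while IFS= read -r -d '' dir; do
		touch "${dir}/.keep"
	done < <(find . -type d -empty -print0)
}
```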

Finds file duplicates, or just creates an index with the file hashes..

You define a target directory and an optional depth (defaults to 1, so only the current directory), and in your target directory you'll get files named after their sha224sum, keeping the original extensions.
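A rough sketch of the indexing idea (target directory, depth and naming here are only illustrative, not the script's actual logic):

```bash
target="./index"; depth=1
mkdir -p "$target"
# hash every regular file up to the given depth and copy it under its sha224sum
find . -maxdepth "$depth" -type f -print0 | while IFS= read -r -d '' f; do
	hash="$(sha224sum "$f" | cut -d' ' -f1)"
	ext="${f##*.}"
	cp -n "$f" "${target}/${hash}.${ext}"
done
```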

For amd64 and arm64 (Termux): a script to build a Node.js version that you define on the command line, installing to /opt/node.js/${version}/ plus a symbolic link 0 pointing there. This way you can manage multiple versions, or check whether the newest installation really works before removing the old one. The only thing left to do, just once, is to merge the fs structure under the symlink path /opt/node.js/0 into the /usr/ hierarchy.

Note

E.g. just call it via make-nodejs.sh 23.9.0 (or make-nodejs.sh v23.9.0)!

Note

In v0.3.11 I changed the $tmdir variable to point below the /opt/nodejs/ (target) directory, due to the noexec option in my /etc/fstab for my /tmp/ (RAM disk) mount.

A little helper script to recursively remove all metadata headers from images.

The primary intention is to secure all images in your web root. So e.g. when you take photos with your smartphone, they'll no longer contain the GPS coordinates, etc. ;-)

Call with -h or --help to get to know a bit more.. the help text is stored in a variable at the top of the file.

JFYI: the only dependency is exiftool, which is the package libimage-exiftool-perl on Debian Linux.
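For comparison, the core idea can be expressed as a single exiftool call (just an illustration of the concept, not the script itself; the web root path is an example):

```bash
# strip all metadata (EXIF, GPS, ...) recursively, without keeping *_original backups
exiftool -overwrite_original -all= -r /var/www/html/
```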

Another helping hand, which became necessary since I'm managing an archive on my server that needs to be synchronized with a USB stick (via crontab, ..).

Warning

PLEASE CHECK the FIRST TWO configuration parts, near the top of the file..

BTW: my target USB stick is formatted with the exFAT file system, so not all Linux file permissions and attributes are supported, and symbolic links aren't either. So I decided to disable all of these by default. If you want/need them, use the -l or --linux command line argument. Additionally see -d or --dereference. ;-)

Tip

As usual, you can also use -h or --help! :-D

It's recommended to copy this file to the /etc/profile.d/ directory.

Now also includes the progress() function.

Plus some helper functions (besides the regular style and color functions).

Note

Since v1.1.0 the line() function is enhanced with colors. These can be set via the $LINE_COLOR environment variable: either three bytes or the keyword auto (which is my default if nothing is defined). See the example screenshot (below)!

Example line()

Tip

I've known it for some months or so, but here I saw again the nifty fact that bash also supports a kind of 'spread syntax' for function calls! Example line: colors=( 32 64 128 ); fg "${colors[@]}"; echo "colored string!"
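A slightly expanded sketch of that example; note that this fg() is only an assumed stand-in for the real one from the script (a truecolor foreground helper), the spreading of the array into the arguments is the point:

```bash
# hypothetical fg(): set a 24-bit foreground color from three RGB byte values
fg()
{
	printf '\e[38;2;%d;%d;%dm' "$1" "$2" "$3"
}

colors=( 32 64 128 )
fg "${colors[@]}"      # the array elements are spread into the three arguments
echo "colored string!"
printf '\e[0m'         # reset the color again
```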


Will traverse recursively through all subdirectories (of the current working directory) using one or more find -iname parameters (especially globs to define file extensions!), and output a list of the found files with their line counts, sorted ascending, ending with the sum of all line counts.
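The rough idea, reduced to a one-liner (illustrative only; the actual script handles the -iname parameters and the final sum itself):

```bash
# count lines of all *.sh and *.js files below the current directory, ascending
find . -type f \( -iname '*.sh' -o -iname '*.js' \) -print0 | xargs -0 wc -l | sort -n
```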

A little helper to scp files, with only the remote file path as argument.

I'm using this to copy backups from my server, mostly because on errors it will repeat the copy (as often as you define in the 'loops' variable). So just set your server {user,host,port} and copy securely.

BTW: yes, I had an unstable connection when I created this.. via mobile phone.
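A rough sketch of the retry idea (user, host, port and the loops count are placeholders here, configure your own):

```bash
user="me"; host="example.org"; port=22; loops=10
remote="$1"                                # the remote file path, the only argument
for ((i = 0; i < loops; ++i)); do
	scp -P "$port" "${user}@${host}:${remote}" . && break
done
```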

My Norbert needed some random input data, from a directory I wanted to populate with some temporary files (of an exactly defined file size).

So I created this very tiny tool.

Important

Dependencies: the dd utility.

Note

JFYI: since v1.4.0 the 1st, 2nd and 3rd argument can also be negative. In this case their absolute values define the maximum for the randomly generated parameters.

Tip

Feel free to extract the randomChars() and random() functions out of the file and put them into one of your /etc/profile.d/*.sh.
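The core of the idea, reduced to a few lines (file name and size are examples; dd is the only dependency, as noted above):

```bash
size=4096                        # exactly defined file size in bytes
name="random.$RANDOM.tmp"        # hypothetical naming scheme
dd if=/dev/urandom of="$name" bs="$size" count=1 iflag=fullblock status=none
```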

Downloads the latest wiki dumps; see the '$url' vector. After downloading, they'll be extracted with bunzip2. Implemented with some checks to be absolutely sure, and it also creates backups, etc. .. jfyi.

You may configure the $url vector/array at the top of the file. It holds all the URLs of the latest dumps at dumps.wikimedia.org, but expects the original .bz2 files (so the script can also extract them).

Note

Depends on the wget and bunzip2 utilities (but this is also checked).

Tip

Before v0.1.4 there were only German dumps; now I've added the English ones, too. Additionally there's now also the wikidatawiki.
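The $url array might look roughly like this (these entries are only an assumption to illustrate the expected shape; check the actual file for the configured dumps):

```bash
url=(
	"https://dumps.wikimedia.org/dewiki/latest/dewiki-latest-pages-articles.xml.bz2"
	"https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"
	"https://dumps.wikimedia.org/wikidatawiki/latest/wikidatawiki-latest-pages-articles.xml.bz2"
)
```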

Manage LUKS encryption via cryptsetup.

Note

I've only just begun this helper script.. it's still pure TODO!

Recursive (really!) sed (regular expression) replacement in (only real!) files.
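In essence it does something like the following (pattern and path are examples; the script itself adds safety checks and argument handling):

```bash
# replace 'foo' with 'bar' in every regular file below the current directory
find . -type f -exec sed -i 's/foo/bar/g' '{}' +
```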

Easily compare toilet (or figlet) outputs for a list of fonts from a file (one font per line). Command line switches are passed through to the tool itself. Input texts can also be set via the command line, or you'll be asked via stdin.

For many fonts see this link; and here are the websites of toilet and figlet.

The font archive can be unzipped into /usr/share/figlet/ (even for toilet), or rather into its fonts/ directory itself.

Functions to be sourced (so copy them to /etc/profile.d/) providing conversions for sizes, and in the future also some more math-related functions.. for now, look at the source to get to know more.

You can convert an amount of bytes to GiB, etc.. both base 1024 and base 1000 are supported, plus direct conversion to a specific target unit, or it'll automatically detect which unit suits best (example calls below):

  • >> Syntax: bytes <value> [ <base=1024> | <unit> [ <prec=2> ] ]
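A few hypothetical calls matching that syntax (the exact output format depends on the function):

```bash
bytes 1536              # auto-detect the best suiting unit (base 1024, precision 2)
bytes 1536 1000         # same value, but using base 1000
bytes 1536 KiB 3        # convert directly to KiB, with 3 decimal places
```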

This is just the beginning of more bash functions.

The project began with baseutils.org, which was planned as regular code (either C or JavaScript). Some first tools were finished back then.. they were planned for my Any/Linux project (which still has much, much TODO).. BUT then I began to take over some old functions from my /etc/profile.d/ scripts, and now here we are..

Still much TODO, but the first functions are declared and I'm going to implement everything soon!

Another tiny helper... really nothing special.

Something similar to the move-by-ext.sh helper, but without write operations: it only counts all the different extensions found under the current working directory. It's also possible to limit the find recursion depth via an optional first argument (which needs to be a positive integer).
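The counting part could look roughly like this (a sketch assuming the depth is the optional first argument; not the script's actual code):

```bash
depth="${1:-32}"                          # hypothetical default depth
find . -maxdepth "$depth" -type f -name '*.*' \
	| sed 's/.*\.//' | sort | uniq -c | sort -n
```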

My source code needed my (copyright) header when I published it. So I created this script, since more than just a few files needed to be updated..

The usage is really simple; look at the output when calling this script without parameters!

Tip

Use the -d or --delete parameter to unlink all of this script's backup files (*.BACKUP, or see the (only) $BACKUP variable), and use -r or --restore to restore the original files from the backups.

Note

My TODO is to replace the file extension argv parameters with full globs, passed through directly to the find command.

Some time ago I needed to set up my computer as a router (using iptables).

This was created very quickly, without many features or tests. Feel free to use it as a kind of template; see this link for more.
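For orientation, the classic NAT-router core usually boils down to a few iptables rules like these (interface names are examples; this is not the script itself):

```bash
wan="eth0"; lan="eth1"
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o "$wan" -j MASQUERADE
iptables -A FORWARD -i "$lan" -o "$wan" -j ACCEPT
iptables -A FORWARD -i "$wan" -o "$lan" -m state --state ESTABLISHED,RELATED -j ACCEPT
```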

Easily use the hfdownloader tool to download full models from Hugging Face, a community for Large Language Models, etc.

You don't really need this script, since the hfdownloader tool is easy enough; it's rather kind of a reminder for myself..

Tip

For some more things about Artificial Intelligence, take a look at my private website, specifically at the ~intelligence area.

Important

Dependencies: Python 3 (w/ pip) and llama.cpp;

This script helps you convert Hugging Face models (see huggingface.co) to the GGUF format (.gguf), which is necessary for the transformers I listed on my website @ ~intelligence.

Tip

Preparations:

python3 -m venv venv
cd venv
source bin/activate
git clone https://github.com/ggerganov/llama.cpp.git
./bin/python3 ./bin/pip install -r llama.cpp/requirements.txt
./bin/python3 llama.cpp/convert-hf-to-gguf.py -h

Just a tiny helper, if you don't want to use the hfdownloader(.sh).

Downloads from Hugging Face with your own token (read from a file) included in the HTTP request header. This massively increases the speed of your downloads, and it allows you to access (your) non-public files, and maybe more..

Expects either a URL or a file with a list of URLs as parameter. Depends on wget.
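The core of it is essentially a wget call with an Authorization header (token file and URL here are placeholders, not the script's configuration):

```bash
token="$(cat ~/.hf_token)"
wget --header="Authorization: Bearer ${token}" \
	"https://huggingface.co/some-org/some-model/resolve/main/model.safetensors"
```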

Downloads a stream until the DURATION is reached (then wget will be stopped). I use this script for my daily download of the 'BigFM Nightlounge' podcast.

Tip

You can add this to your '/etc/crontab'. ;-)
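One simple way to stop wget after a fixed duration looks like this (stream URL, file name and duration are examples, not the script's configuration):

```bash
DURATION=3600          # seconds
timeout "$DURATION" wget -q -O nightlounge.mp3 "https://streams.example.org/nightlounge"
```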

The main reason for this script was: my Node.js projects need to handle whole block devices or partitions. But I wanted to configure them by their (PART)UUID, so there'd be no problems when regular device names like '/dev/sdb' change (which can happen, and that is a big problem!).

In Node.js there's no regular way to open devices/partitions by their (PART)UUID; additionally, I couldn't get the sizes of the partitions or drives via fstat*()..

The second reason was: using bash arrays and a special syntax to split the --pairs output into key/value pairs etc., I wanted to leave myself a hint for future shell scripts.. and for you! Note that I marked where to use case, if you'd like to handle the key/value pairs (a small sketch follows below).
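A small sketch of that --pairs splitting idea, assuming lsblk output (the columns and the eval-based parsing are illustrative; the script's own syntax may differ):

```bash
while IFS= read -r line; do
	declare -A kv=()
	eval "pairs=( $line )"               # each item has the form KEY="value"
	for p in "${pairs[@]}"; do
		kv["${p%%=*}"]="${p#*=}"
		# a `case "${p%%=*}" in ... esac` could handle specific keys here
	done
	echo "${kv[NAME]}: UUID=${kv[UUID]:-n/a} SIZE=${kv[SIZE]:-n/a}"
done < <(lsblk --pairs --output NAME,UUID,PARTUUID,SIZE)
```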

Tip

There's another version available as well.. jfyi.

Tiniest.. just prints out the current cursor position in your active terminal.

The real function cursor() is only seven lines long.
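The usual way to query the position is the DSR escape sequence; a minimal sketch of that approach (an assumption for illustration, not necessarily the author's seven lines):

```bash
cursor()
{
	local row col
	# ask the terminal (ESC [ 6 n) and read the "ESC [ row ; col R" reply
	IFS='[;' read -rs -d 'R' -p $'\e[6n' _ row col
	echo "${row};${col}"
}
```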

I initialize a sub-part of my bigger project with the help of this script.

See the $COPY file list. End an item without a trailing slash to only initialize it empty (even though the directory contains entries in your original project). Symbolic links stay exactly the same (using readlink).

Contact

Copyright and License

The copyright is (c) Sebastian Kucharczyk, and it's licensed under the MIT license (also known as the 'X' or 'X11' license).
