System Storage Manager
**********************

A single tool to manage your storage.


Description
***********

System Storage Manager provides an easy-to-use command line interface
to manage your storage using various technologies like lvm, btrfs,
encrypted volumes and more.

In more sophisticated enterprise storage environments, management with
Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices
(md) is becoming increasingly difficult.  With file systems added
to the mix, the number of tools needed to configure and manage storage
has grown so large that it is simply not user friendly.  With so many
options for a system administrator to consider, the opportunity for
errors and problems is large.

The btrfs administration tools have shown us that storage management
can be simplified, and we are working to bring that ease of use to
Linux filesystems in general.


Licence
*******

(C)2011 Red Hat, Inc., Lukas Czerner <lczerner@redhat.com>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.


Commands
********


Introduction
************

System Storage Manager has several commands that you can specify on
the command line as the first argument to ssm. They all have a
specific use and their own arguments, but global ssm arguments are
propagated to all commands.


Create command
**************

This command creates a new volume with defined parameters. If a
**device** is provided, it will be used to create the volume, and
hence will be added into the **pool** prior to the volume creation
(see the *Add command* section). Multiple devices can be used to
create a volume.

If the **device** is already used in a different pool, then **ssm**
will ask you whether you want to remove it from the original pool. If
you decline, or the removal fails, then the **volume** creation fails
if the *SIZE* was not provided. On the other hand, if the *SIZE* is
provided and some devices cannot be added to the **pool**, the volume
creation might still succeed if there is enough space in the **pool**.

A *POOL* name can be specified as well. If the pool exists, the new
volume will be created from that pool (optionally adding the
**device** into the pool). However, if the *POOL* does not exist,
**ssm** will attempt to create a new pool with the provided **device**
and then create a new volume from this pool. If the **--backend**
argument is omitted, the default **ssm** backend will be used. The
default backend is *lvm*.

**ssm** also supports creating RAID configurations; however, some
back-ends might not support all the levels, or might not support RAID
at all. In this case, volume creation will fail.

If a **mount** point is provided, **ssm** will attempt to mount the
volume after it is created. However, it will fail if a mountable file
system is not present on the volume.


List command
************

List information about all detected devices, pools, volumes and
snapshots found in the system. The **list** command can be used either
alone to list all of the information, or you can request a specific
section only (see the example after the following list).

The following sections can be specified:

{volumes | vol}
   List information about all **volumes** found in the system.

{devices | dev}
   List information about all **devices** found in the system. Some
   devices are intentionally hidden, for example the cdrom or DM/MD
   devices, since those are actually listed as volumes.

{pools | pool}
   List information about all **pools** found in the system.

{filesystems | fs}
   List information about all volumes containing **filesystems** found
   in the system.

{snapshots | snap}
   List information about all **snapshots** found in the system. Note
   that some back-ends do not support snapshotting and some cannot
   distinguish between a snapshot and a regular volume. In this case
   **ssm** will try to recognize the volume name in order to identify
   a **snapshot**, but if the **ssm** regular expression does not
   match the snapshot pattern, the snapshot will not be recognized.
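
For example, to list only the pools, or only the volumes:

   # ssm list pools
   # ssm list vol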


Remove command
**************

This command removes an **item** from the system. Multiple items can
be specified. If an **item** cannot be removed for some reason, it
will be skipped.

An **item** can represent:

device
   Remove a **device** from the pool. Note that this cannot be done in
   some cases where the device is used by the pool. You can use the
   **-f** argument to *force* removal (see the example after this
   list). If the device does not belong to any pool, it will be
   skipped.

pool
   Remove the **pool** from the system. This will also remove all
   volumes created from that pool.

volume
   Remove the **volume** from the system. Note that this will fail if
   the **volume** is mounted, and it cannot be *forced* with **-f**.
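
For example, a sketch of forcing the removal of a device that is still
used by its pool (the device name is a placeholder):

   # ssm remove -f /dev/sdb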


Resize command
**************

Change the size of the **volume** and file system. If there is no file
system, only the **volume** itself will be resized. You can specify a
**device** to add into the **volume** pool prior to the resize. Note
that the **device** will only be added into the pool if the **volume**
size is going to grow.

If the **device** is already used in a different pool, then **ssm**
will ask you whether you want to remove it from the original pool.

In some cases, the file system has to be mounted in order to be
resized. This will be handled by **ssm** automatically by mounting the
**volume** temporarily.
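
For example, a sketch of growing a volume while adding a device into
its pool, and of shrinking it by a relative size (/dev/sdd is a
placeholder):

   # ssm resize -s 20G /dev/lvm_pool/lvol001 /dev/sdd
   # ssm resize -s-5G /dev/lvm_pool/lvol001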


Check command
*************

Check the file system consistency on the **volume**. You can specify
multiple volumes to check. If there is no file system on the
**volume**, the **volume** will be skipped.

In some cases, the file system has to be mounted in order to be
checked. This will be handled by **ssm** automatically by mounting the
**volume** temporarily.
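
For example, a sketch of checking two volumes at once (the volume
names are placeholders):

   # ssm check /dev/lvm_pool/lvol001 /dev/lvm_pool/myvolume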


Snapshot command
****************

Take a snapshot of an existing **volume**. This operation will fail if
the back-end to which the **volume** belongs does not support
snapshotting. Note that you cannot specify both *NAME* and *DESC*
since those options are mutually exclusive.

In some cases, the file system has to be mounted in order to take a
snapshot of the **volume**. This will be handled by **ssm**
automatically by mounting the **volume** temporarily.
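
For example, a sketch of taking a snapshot with a custom name (the
volume name is a placeholder):

   # ssm snapshot -n my_snapshot /dev/lvm_pool/lvol001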


Add command
***********

This command adds a **device** into the pool. By default, the
**device** will not be added if it is already part of a different
pool, but the user will be asked whether to remove the device from its
pool. When multiple devices are provided, all of them are added into
the pool. If one of the devices cannot be added into the pool for any
reason, the add command will fail. If no pool is specified, the
default pool will be chosen. If the pool does not exist, it will be
created using the provided devices.
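
For example, a sketch of adding two devices into a pool, creating the
pool if it does not exist yet (all names are placeholders):

   # ssm add -p my_pool /dev/sdb /dev/sdc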


Backends
********


Introduction
************

Ssm aims to create a unified user interface for various technologies
like Device Mapper (dm), the Btrfs file system, Multiple Devices (md)
and possibly more. In order to do so, we have a core abstraction layer
in "ssmlib/main.py". This abstraction layer should ideally know
nothing about the underlying technology, but rather comply with the
**device**, **pool** and **volume** abstractions.

Various backends can be registered in "ssmlib/main.py" in order to
handle a specific storage technology, implementing methods like
*create*, *snapshot*, or *remove* for volumes and pools. The core then
calls these methods to manage the storage without needing to know what
lies underneath it. There are already several backends registered in
ssm.


Btrfs backend
*************

Btrfs is a file system with many advanced features, including volume
management. This is the reason why btrfs is handled differently from
other *conventional* file systems in **ssm**: it is used as a volume
management back-end.

Pools, volumes and snapshots can be created with the btrfs backend;
here is what that means from the btrfs point of view:

pool
   A pool is actually the btrfs file system itself, because it can be
   extended by adding more devices, or shrunk by removing devices from
   it. Subvolumes and snapshots can also be created. When a new btrfs
   pool is to be created, **ssm** simply creates a btrfs file system,
   which means that every new btrfs pool has one volume of the same
   name as the pool itself which cannot be removed without removing
   the entire pool. The default btrfs pool name is **btrfs_pool**.

   When creating a new btrfs pool, the name of the pool is used as the
   file system label. If there is an already existing btrfs file
   system in the system without a label, a btrfs pool name will be
   generated for internal use in the following format: "btrfs_{device
   base name}".

   A btrfs pool is created when the **create** or **add** command is
   used with devices specified and a non-existent pool name.

volume
   A volume in the btrfs back-end is actually just a btrfs subvolume,
   with the exception of the first volume created on btrfs pool
   creation, which is the file system itself. Subvolumes can only be
   created on a btrfs file system when it is mounted, but the user
   does not have to worry about that, since **ssm** will automatically
   mount the file system temporarily in order to create a new
   subvolume.

   The volume name is used as the subvolume path in the btrfs file
   system, and every object in this path must exist in order to create
   a volume. The volume name for internal tracking and for presenting
   to the user is generated in the format "{pool_name}:{volume name}",
   but volumes can also be referenced by their mount point.

   Btrfs volumes are only shown in the *list* output when the file
   system is mounted, with the exception of the main btrfs volume,
   the file system itself.

   A new btrfs volume can be created with the **create** command.

snapshot
   The btrfs file system supports subvolume snapshotting, so you can
   take a snapshot of any btrfs volume in the system with **ssm**.
   However, btrfs does not distinguish between subvolumes and
   snapshots, because a snapshot actually is just a subvolume with
   some blocks shared with a different subvolume. This means that
   **ssm** is not able to recognize a btrfs snapshot directly;
   instead, it tries to recognize a special name format of the btrfs
   volume. However, if a *NAME* that does not match the special
   pattern is specified when creating the snapshot, the snapshot will
   not be recognized by **ssm** and will be listed as a regular btrfs
   volume.

   A new btrfs snapshot can be created with the **snapshot** command.

device
   Btrfs does not require anything special to be created on the
   device before use.


Lvm backend
***********

Pools, volumes and snapshots can be created with the lvm backend,
which pretty much matches the lvm abstraction.

pool
   An lvm pool is just a *volume group* in lvm language. That means it
   groups devices, and new logical volumes can be created out of the
   lvm pool. The default lvm pool name is **lvm_pool**.

   An lvm pool is created when the **create** or **add** command is
   used with devices specified and a non-existent pool name.

volume
   An lvm volume is just a *logical volume* in lvm language. An lvm
   volume can be created with the **create** command.

snapshot
   Lvm volumes can be snapshotted as well. When a snapshot is created
   from an lvm volume, a new *snapshot* volume is created, which can
   be handled like any other lvm volume. Unlike *btrfs*, lvm is able
   to distinguish a snapshot from a regular volume, so there is no
   need for a snapshot name to match a special pattern.

device
   Lvm requires a *physical volume* to be created on the device, but
   with **ssm** this is transparent to the user.


Crypt backend
*************

The crypt backend in **ssm** is currently limited to gathering
information about encrypted volumes in the system. You cannot create
or manage encrypted volumes or pools yet, but this will be extended in
the future.


Environment variables
*********************

SSM_DEFAULT_BACKEND
   Specify which backend will be used by default. This can be
   overridden by specifying the **-b** or **--backend** argument.
   Currently only *lvm* and *btrfs* are supported (see the example
   after this list).

SSM_LVM_DEFAULT_POOL
   Name of the default lvm pool to be used if **-p** or **--pool**
   argument is omitted.

SSM_BTRFS_DEFAULT_POOL
   Name of the default btrfs pool to be used if **-p** or **--pool**
   argument is omitted.

SSM_PREFIX_FILTER
   When this is set, **ssm** will filter out all devices, volumes and
   pools whose names do not start with this prefix. It is used mainly
   in the **ssm** test suite to make sure that we do not scramble the
   local system configuration.
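
For example, a minimal sketch of switching the default backend via the
environment (the device names are placeholders):

   # export SSM_DEFAULT_BACKEND=btrfs
   # ssm create /dev/sdb /dev/sdc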


Quick examples
**************

List system storage:

   # ssm list
   ----------------------------------
   Device          Total  Mount point
   ----------------------------------
   /dev/loop0    5.00 GB
   /dev/loop1    5.00 GB
   /dev/loop2    5.00 GB
   /dev/loop3    5.00 GB
   /dev/loop4    5.00 GB
   /dev/sda    149.05 GB  PARTITIONED
   /dev/sda1    19.53 GB  /
   /dev/sda2    78.12 GB
   /dev/sda3     1.95 GB  SWAP
   /dev/sda4     1.00 KB
   /dev/sda5    49.44 GB  /mnt/test
   ----------------------------------
   ------------------------------------------------------------------------------
   Volume     Pool      Volume size  FS     FS size      Free  Type   Mount point
   ------------------------------------------------------------------------------
   /dev/dm-0  dm-crypt     78.12 GB  ext4  78.12 GB  45.01 GB  crypt  /home
   /dev/sda1               19.53 GB  ext4  19.53 GB  12.67 GB  part   /
   /dev/sda5               49.44 GB  ext4  49.44 GB  29.77 GB  part   /mnt/test
   ------------------------------------------------------------------------------

Create a volume of a defined size with a defined file system. The
default back-end is set to lvm and the lvm default pool name is
lvm_pool:

   # ssm create --fs ext4 -s 15G /dev/loop0 /dev/loop1

The name of the new volume is '/dev/lvm_pool/lvol001'. Resize the
volume to 10GB:

   # ssm resize -s-5G /dev/lvm_pool/lvol001

Grow the volume to 25GB; this requires adding more devices into the
pool:

   # ssm resize -s 25G /dev/lvm_pool/lvol001 /dev/loop2

Now we can try to create a new lvm volume named 'myvolume' from the
remaining pool space with an xfs file system and mount it on
/mnt/test1:

   # ssm create --fs xfs --name myvolume /mnt/test1

List all volumes with a file system:

   # ssm list filesystems
   -----------------------------------------------------------------------------------------------
   Volume                  Pool        Volume size  FS      FS size      Free  Type    Mount point
   -----------------------------------------------------------------------------------------------
   /dev/lvm_pool/lvol001   lvm_pool       25.00 GB  ext4   25.00 GB  23.19 GB  linear
   /dev/lvm_pool/myvolume  lvm_pool        4.99 GB  xfs     4.98 GB   4.98 GB  linear  /mnt/test1
   /dev/dm-0               dm-crypt       78.12 GB  ext4   78.12 GB  45.33 GB  crypt   /home
   /dev/sda1                              19.53 GB  ext4   19.53 GB  12.67 GB  part    /
   /dev/sda5                              49.44 GB  ext4   49.44 GB  29.77 GB  part    /mnt/test
   -----------------------------------------------------------------------------------------------

You can then easily remove the old volume:

   # ssm remove /dev/lvm_pool/lvol001

Now let's try to create a btrfs volume. Btrfs is a separate backend,
not just a file system, because btrfs itself has an integrated volume
manager. The default btrfs pool name is btrfs_pool:

   # ssm -b btrfs create /dev/loop3 /dev/loop4

Now we create btrfs subvolumes. Note that the btrfs file system has to
be mounted in order to create subvolumes; however, ssm will handle
that for you:

   # ssm create -p btrfs_pool
   # ssm create -n new_subvolume -p btrfs_pool


   # ssm list
   -----------------------------------------------------------------
   Device         Free      Used      Total  Pool        Mount point
   -----------------------------------------------------------------
   /dev/loop0  0.00 KB  10.00 GB   10.00 GB  lvm_pool
   /dev/loop1  0.00 KB  10.00 GB   10.00 GB  lvm_pool
   /dev/loop2  0.00 KB  10.00 GB   10.00 GB  lvm_pool
   /dev/loop3  8.05 GB   1.95 GB   10.00 GB  btrfs_pool
   /dev/loop4  6.54 GB   1.93 GB    8.47 GB  btrfs_pool
   /dev/sda                       149.05 GB              PARTITIONED
   /dev/sda1                       19.53 GB              /
   /dev/sda2                       78.12 GB
   /dev/sda3                        1.95 GB              SWAP
   /dev/sda4                        1.00 KB
   /dev/sda5                       49.44 GB              /mnt/test
   -----------------------------------------------------------------
   -------------------------------------------------------
   Pool        Type   Devices     Free      Used     Total
   -------------------------------------------------------
   lvm_pool    lvm    3        0.00 KB  29.99 GB  29.99 GB
   btrfs_pool  btrfs  2        3.84 MB  18.47 GB  18.47 GB
   -------------------------------------------------------
   -----------------------------------------------------------------------------------------------
   Volume                  Pool        Volume size  FS      FS size      Free  Type    Mount point
   -----------------------------------------------------------------------------------------------
   /dev/lvm_pool/lvol001   lvm_pool       25.00 GB  ext4   25.00 GB  23.19 GB  linear
   /dev/lvm_pool/myvolume  lvm_pool        4.99 GB  xfs     4.98 GB   4.98 GB  linear  /mnt/test1
   /dev/dm-0               dm-crypt       78.12 GB  ext4   78.12 GB  45.33 GB  crypt   /home
   btrfs_pool              btrfs_pool     18.47 GB  btrfs  18.47 GB  18.47 GB  btrfs
   /dev/sda1                              19.53 GB  ext4   19.53 GB  12.67 GB  part    /
   /dev/sda5                              49.44 GB  ext4   49.44 GB  29.77 GB  part    /mnt/test
   -----------------------------------------------------------------------------------------------

Now let's free up some of the loop devices so we can try to add them
into the btrfs_pool. We'll simply remove the lvm volume myvolume and
resize lvol001 so that we can remove /dev/loop2. Note that myvolume is
mounted, so we have to unmount it first:

   # umount /mnt/test1
   # ssm remove /dev/lvm_pool/myvolume
   # ssm resize -s-10G /dev/lvm_pool/lvol001
   # ssm remove /dev/loop2

Add device to the btrfs file system:

   # ssm add /dev/loop2 -p btrfs_pool

Let's see what happened. Note that to actually see btrfs subvolumes
you have to mount the file system first:

   # mount -L btrfs_pool /mnt/test1/
   # ssm list volumes
   ------------------------------------------------------------------------------------------------------------------------
   Volume                         Pool        Volume size  FS      FS size      Free  Type    Mount point
   ------------------------------------------------------------------------------------------------------------------------
   /dev/lvm_pool/lvol001          lvm_pool       15.00 GB  ext4   15.00 GB  13.85 GB  linear
   /dev/dm-0                      dm-crypt       78.12 GB  ext4   78.12 GB  45.33 GB  crypt   /home
   btrfs_pool                     btrfs_pool     28.47 GB  btrfs  28.47 GB  28.47 GB  btrfs   /mnt/test1
   btrfs_pool:2012-05-09-T113426  btrfs_pool     28.47 GB  btrfs  28.47 GB  28.47 GB  btrfs   /mnt/test1/2012-05-09-T113426
   btrfs_pool:new_subvolume       btrfs_pool     28.47 GB  btrfs  28.47 GB  28.47 GB  btrfs   /mnt/test1/new_subvolume
   /dev/sda1                                     19.53 GB  ext4   19.53 GB  12.67 GB  part    /
   /dev/sda5                                     49.44 GB  ext4   49.44 GB  29.77 GB  part    /mnt/test
   ------------------------------------------------------------------------------------------------------------------------

Remove the whole lvm pool, one unused device from the btrfs pool, and
one of the btrfs subvolumes. Note that with btrfs, the pool has the
same name as its main volume:

   # ssm remove lvm_pool /dev/loop2 /mnt/test1/new_subvolume/

Snapshots can also be created with ssm:

   # ssm snapshot btrfs_pool
   # ssm snapshot -n btrfs_snapshot btrfs_pool

With lvm, you can also create snapshots:

   # ssm create -s 10G /dev/loop[01]
   # ssm snapshot /dev/lvm_pool/lvol001

Now list all snapshots. Note that btrfs snapshots are actually just
subvolumes with some blocks shared with the original subvolume, so
there is currently no way to distinguish between them. ssm uses a
little trick of searching for name patterns to recognize snapshots, so
if you specify your own name for the snapshot, ssm will not recognize
it as a snapshot, but rather as a regular volume (subvolume). This
problem does not exist with lvm:

   # ssm list snapshots
   -------------------------------------------------------------------------------------------------------------
   Snapshot                            Origin   Volume size     Size  Type    Mount point
   -------------------------------------------------------------------------------------------------------------
   /dev/lvm_pool/snap20120509T121611   lvol001      2.00 GB  0.00 KB  linear
   btrfs_pool:snap-2012-05-09-T121313              18.47 GB           btrfs   /mnt/test1/snap-2012-05-09-T121313
   -------------------------------------------------------------------------------------------------------------


Installation
************

To install System Storage Manager into your system simply run:

   python setup.py install

as root in the System Storage Manager directory. Make sure that your
system configuration meets the *requirements* in order for ssm to work
correctly.

Note that you can run **ssm** even without installation by using the
local sources with:

   bin/ssm.local


Requirements
************

Python 2.6 or higher is required to run this tool. System Storage
Manager can only be run as root, since most of the commands require
root privileges.

There are other requirements listed below; note that you do not
necessarily need all dependencies for all backends. However, if some
of the tools required by a backend are missing, that backend will not
work.


Python modules
==============

* os

* re

* sys

* stat

* argparse

* datetime

* threading

* subprocess


System tools
============

* tune2fs

* fsck.SUPPORTED_FS

* resize2fs

* xfs_db

* xfs_check

* xfs_growfs

* mkfs.SUPPORTED_FS

* which

* mount

* blkid

* wipefs


Lvm backend
===========

* lvm2 binaries


Btrfs backend
=============

* btrfs progs


Crypt backend
=============

* dmsetup

* cryptsetup


For developers
**************

We are accepting patches! If you're interested in contributing to the
System Storage Manager code, just check out the git repository located
on SourceForge. Please base all of your work on the "devel" branch
since it is more up-to-date and it will save us some work when merging
your patches:

   git clone --branch devel git://git.code.sf.net/p/storagemanager/code storagemanager-code

Any form of contribution - patches, documentation, reviews or rants -
is appreciated. See the *Mailing list* section.


Tests
=====

System Storage Manager contains a regression test suite to make sure
that we do not break things that should already work. We recommend
that every developer run the tests before sending patches:

   python test.py

Tests in System Storage Manager are divided into four levels.

1. First the doctest is executed.

2. Then we have unit tests in "tests/unittests/test_ssm.py" which
   test the core of ssm, "ssmlib/main.py". They check basic things
   like required backend methods and variables, flag propagation,
   proper class initialization, and finally whether commands actually
   result in the proper backend callbacks. They do not require root
   permissions and do not touch your system configuration in any way.
   They should not invoke any shell command, and if they do, it's a
   bug.

3. The second part of the unit tests is backend testing. Here we
   mainly test whether ssm commands result in the proper backend
   operations. These tests do not require root permissions and do not
   touch your system configuration in any way. They should not invoke
   any shell command, and if they do, it's a bug.

4. And finally, there are real bash tests located in
   "tests/bashtests". The bash tests are divided into files; each
   file tests one command for one backend and contains a series of
   test cases followed by checks of whether the command created the
   expected result. In order to test real system commands, we have to
   create system devices to test on without touching any of the
   existing system configuration.

   Before each test, several devices are created using *dmsetup* in
   the test directory. These devices are used in the test cases
   instead of real devices. Real operations are performed on those
   devices just as they would be on real system devices. This implies
   that this phase requires root privileges, and it will not be run
   otherwise. In order to make sure that **ssm** does not touch any
   existing system configuration, each device, pool and volume name
   includes a special prefix, and the SSM_PREFIX_FILTER environment
   variable is set to make **ssm** exclude all items whose names do
   not match this filter.

   Even though we have tried hard to make sure that the bash tests do
   not change any of your system configuration, the recommendation is
   **not** to run the tests with root privileges on your work or
   production system, but rather to run them on a testing machine.

If you change or create new functionality, please make sure that it is
covered by the System Storage Manager regression test suite, so that
we do not break it unintentionally.

Important: Please make sure to run the full test suite before you send
  a patch to the mailing list. To do so, simply run "python test.py"
  as root on your test machine.


Documentation
=============

System Storage Manager documentation is stored in the "doc/"
directory. The documentation is built using **sphinx**, which helps us
avoid duplicating text for different types of documentation (man page,
html pages, readme). If you are going to modify the documentation,
please make sure not to modify the manual page, html pages or README
directly, but rather modify the "doc/*.rst" and "doc/src/*.rst" files
accordingly, so that the change is propagated to all documents.

Moreover, parts of the documentation, such as the *synopsis* or the
ssm command *options*, are parsed directly from the ssm help output.
This means that when you add or change an argument in **ssm**, the
only thing you have to do is add or change it in the "ssmlib/main.py"
source code and then run "make dist" in the "doc/" directory; all the
documents will then be updated automatically.
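
For example, after changing an argument in "ssmlib/main.py":

   cd doc
   make dist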

Important: Please make sure you update the documentation when you add
  or change **ssm** functionality, if the change requires it. Then
  regenerate all the documents using "make dist" and include the
  changes in the patch.


Mailing list
============

System Storage Manager developers communicate via the mailing list.
The address of our mailing list is
storagemanager-devel@lists.sourceforge.net and you can subscribe on
the SourceForge project page
https://lists.sourceforge.net/lists/listinfo/storagemanager-devel.
Mailing list archives can be found at
http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel.

This is also the list to which to send patches, and where the review
process happens. We do not have a separate *user* mailing list, so
feel free to drop your questions there as well.


Posting patches
===============

As already mentioned, we are accepting patches! And we are very happy
for every contribution. If you're going to send a patch in, please
make sure to follow some simple rules:

1. Before you post a patch, please run our regression test suite to
   make sure that your change does not break someone else's work. See
   the *Tests* section.

2. If you're making a change that might require a documentation
   update, please update the documentation as well. See the
   *Documentation* section.

3. Make sure your patch has all the requisites: a *short
   description*, preferably at most 50 characters, describing the main
   idea of the change; a *long description* describing what was
   changed and why; and finally a Signed-off-by tag.

4. If you're going to send a patch to the mailing list, please send
   the patch inlined in the email body. It is much better for the
   review process.

Hint: You can use **git** to do all the work for you. "git format-patch"
  and "git send-email" will help you with creating and sending the
  patch.
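
For example, a minimal sketch of that flow (the generated patch file
name will vary):

   git format-patch -1 --signoff
   git send-email --to=storagemanager-devel@lists.sourceforge.net 0001-*.patch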
