
This is a very simple static site/blog generator (and my blog itself) for Emacs and org-mode.

  • This static blog/website generator uses only one org file. It tries to utilize org-mode features as much as possible. It assumes as little as possible about how you structure your website.
  • Every header in this file is treated as a blog post by default.
  • Headers tagged with page are considered to be special pages (non-post pages) that can be exported under any path you want.
  • Header properties are utilized to modify exporting behavior.
  • A very simple templating system is used.
    • Templates are basically named org src blocks containing some HTML. You can (and are encouraged to) use all the features that org-mode and babel give you.
    • There are three different templates that you need to provide, just put a named src block somewhere in your org file.
      Post template
      Template that is used while exporting posts. Src block name should be post-template.
      Page template
      Template that is used while exporting non-post pages (headers that are tagged with page tag). Src block name should be page-template.
      Tag template
      Template that is used while exporting pages that list all posts belonging to a tag. Src block name should be tag-template.
      • All of these templates can be overridden by setting the :TEMPLATE: property on a specific header (except tag-template, since tag pages are not generated from headers).

See the following example org file that has two special pages and two posts. It also contains all the templates and configuration needed to generate the website.

#+TITLE: Your website title
#+AUTHOR: Your name
#+EMAIL: your@email.address
#+STARTUP: overview
#+OPTIONS: html-style:nil num:nil H:4

* My main page :page:
:EXPORT_AS: index
:PUBLISH_DATE: [2016-07-13 Wed]

This is the main page. This is not a post, because it's tagged with
=page=. It will be exported as =index.html=, as indicated by the
=EXPORT_AS= property.

* Post 1 :tag1:tag2:
:PUBLISH_DATE: [2016-07-13 Wed]
:CUSTOM_ID: post-1

This is a simple blog post. =PUBLISH_DATE= property is required. You
can supply =CUSTOM_ID= or it will be generated automatically while
exporting. Notice that you can also add tags to your posts, just
normal org tags.

* Post 2 :tag2:tag3:
:PUBLISH_DATE: [2016-07-13 Wed]
:OPTIONS: toc:nil
:CUSTOM_ID: post-2

This is a post with more customization. We override/change org's HTML
export options with the help of the =OPTIONS= property here. This post
will not have any table of contents.

* About page :page:
:EXPORT_AS: about
:PUBLISH_DATE: [2016-07-13 Wed]
:TEMPLATE: about-page-template
:CUSTOM_ID: about-page

This is another example page, it will be exported as
=about.html=. This uses a different template, named
=about-page-template= which you need to provide as a src block
somewhere in this document.

* Blog configuration :noexport:
This header is tagged with =noexport=, which means it will be skipped
in the exporting process (including sub-headers). You can utilize this
header to do your configuration.

** Templates
Templates are simple org src blocks. You can use special =${variable}=
or =${(elisp-code)}= syntax in them.

*** Post template example
Notice that src block is named =post-template=.

#+name: post-template
#+begin_src html
    <!-- Following will be replaced with the post's title -->
    <title>${title} - My website</title>
      <li> Publish date: ${publish-date}
    <!-- Following will be replaced with org-mode generated HTML content of the specific header -->
#+end_src

*** Page template example
Notice that src block is named =page-template=. I'll simply use
=post-template= again for simplicity. Also notice how we can utilize
org babel features.

#+name: page-template
#+begin_src html :noweb yes
<<post-template>>
#+end_src
*** Tag page template
This is the template from which tag-overview pages (pages that list all
the posts belonging to a tag) are generated. This also demonstrates how
you can utilize elisp within the templates.

#+name: tag-template
#+begin_src html
    <h1>Posts tagged with ${tag-name}</h1>

    <!-- Create a link to RSS feed for this tag -->
    <a href="${(format "/%s/%s.xml" isamert/blog-rss-per-tag tag-name)}">
      RSS feed for ${tag-name} tag
    </a>

    <!-- Actual list of posts belonging to this tag -->
      ${(--map (format "<li><a href=\"/%s\">%s</a></li>\n" (plist-get it :path) (plist-get it :title)) posts)}
#+end_src

*** Special template: about-page-template
Remember how we set =TEMPLATE= property to the [[About page]] as
=about-page-template=. Now we are defining the template:

#+name: about-page-template
#+begin_src html
#+end_src

My configuration

General configuration

Just setting the port and binding it into an org-variable. I’ll utilize this later in index.js.

(setq isamert/blog-local-port 3000)

Static files


This is the file that contains all the dynamic logic for my website. I'm trying to keep it as minimal as possible.

if (location.port !== <<blog-local-port()>>) {
    // Simple, privacy friendly analytics

document.addEventListener('DOMContentLoaded', () => {

function addLinksToHeaders() {
  document.querySelectorAll('h1, h2, h3, h4, h5, h6').forEach(h => {
    if (!h.hasAttribute('id')) {

    wrap(h, elem('a', {
      class: 'clear',
      href: '#' +,

function highlightCodeBlocks(_event) {
  // Disable auto-lang detection
  hljs.configure({languages: []})

  let pageLang

  // Highlight all code blocks
  document.querySelectorAll('pre.src').forEach(block => {
    const lang = [...block.classList].find(x => x.startsWith('src-'))
    if (lang) {
      const currLang = lang.split('-')[1]
      if (currLang) {
        pageLang = currLang.replace(/elisp/g, 'lisp')

  // Highlight all inline code blocks
  document.querySelectorAll('code').forEach(block => {
    if (pageLang) {

// Utils

function wrap(elem, wrapper) {
  elem.parentNode.replaceChild(wrapper, elem)
  wrapper.appendChild(elem)
}

function elem(type, attrs) {
  const e = document.createElement(type)
  Object.keys(attrs).forEach(attr => {
    if (attr !== 'children') {
      e.setAttribute(attr, attrs[attr])
    }
  })

  if (attrs.children) {
    attrs.children.forEach(child => e.appendChild(child))
  }

  return e
}


The one and only css file that I use for my blog. Also trying to keep it minimal.

/* Fonts */
/* TODO: Maybe I should serve these, instead of using a cdn */
@import url('');
@import url(',wght@0,400;0,700;1,400;1,700&display=swap');
@import url(';700&display=swap');
@import url('');

:root {
  --font: 'Noto Serif', serif;
  --font-color: #545759;
  --light-font-color: #808486;
  --muted-color: #c7c7c7;

  --monospace-font: 'TypoPRO Iosevka Term', monospace;
  --src-block-bg-color: #f3f2ee;
  --inline-src-block-bg-color: #f8f7f3;

  --header-font: 'Roboto Slab', serif;
  --header-color: #2b2e30;
  --post-title-font-size: 2em;
  --post-first-header-font-size: 1.7em;
  --post-second-header-font-size: 1.5em;
  --post-third-header-font-size: 1.3em;
  --post-fourth-header-font-size: 1.2em;

  --link-color: #5f9b65;
  --link-hover-color: #808486;
}

body {
  font-family: var(--font);
  color: var(--font-color);
  margin: 0;
  padding: 0;
}
header {
  position: fixed;
  width: 100%;
  top: 0;
  background-color: #f8f7f3;
  padding: 1rem 3.5rem;
  box-shadow: 3px 3px 2px #aaaaaa;
  display: flex;
  justify-content: space-around;
}

header > .site-title {
  font-weight: bold;
}

section {
  margin-top: 3.5rem !important;
  margin: 3rem auto;
  max-width: 46rem;
  line-height: 1.5;
  padding: 0 10px;
}

footer {
  max-width: 46rem;
  margin-right: auto;
  margin-left: auto;
}

footer > p {
  text-align: left;
}

footer > p > span {
  float: right;
}

h1, h2, h3, h4, h5, h6 {
  font-family: var(--header-font);
  line-height: 1.35;
  color: var(--header-color);
}

h1 {
  font-size: var(--post-title-font-size);
  border-bottom: 1.7px dashed var(--light-font-color);
  margin-bottom: 1rem;
}

h2 {
  font-size: var(--post-first-header-font-size);
  border-bottom: 1.5px dashed var(--light-font-color);
}

h3 {
  font-size: var(--post-second-header-font-size);
  border-bottom: 1px dashed var(--light-font-color);
}

h4 {
  font-size: var(--post-third-header-font-size);
}

h5 {
  font-size: var(--post-fourth-header-font-size);
}

h1:hover, h2:hover, h3:hover, h4:hover {
  color: var(--light-font-color);
  cursor: pointer;
}

blockquote {
  border-left: 1.4px solid var(--light-font-color);
  margin: 0;
  margin-left: 1rem;
  padding: 0 0 0 20px;
  font-style: italic;
}

a {
  color: var(--link-color);
  text-decoration: none;
}

a:hover {
  color: var(--link-hover-color);
  text-decoration-color: var(--link-hover-color);
}

.org-dl dt {
  font-weight: bold;
  font-style: italic;
}

/* Inline codes */
code {
  font-family: var(--monospace-font);
  /* font-size: 0.7em; */
  background: var(--inline-src-block-bg-color) !important;
  border-radius: 0.4rem !important;
  padding: 0.24rem !important;
}

/*
 * Make code blocks in paragraphs inline.
 * hljs turns them into a fully-fledged code block. We don't want that.
 */
code {
  display: inline !important;
}

hr {
  border: 0;
  background: var(--muted-color);
  height: 1px;
}

/* Code blocks */
.src, .example {
  /* font-size: 0.85em; */
  font-family: var(--monospace-font);
  background: var(--src-block-bg-color);
  padding: .4rem .7rem !important;
  border-radius: 0.3rem !important;
  display: block !important;
}

.post-information {
  display: flex;
  justify-content: space-between;
}

.post-information ul {
  padding: 0;
  margin: 0;
}

.post-information ul:nth-child(2) {
  text-align: right;
}

.post-information li {
  list-style-type: none;
  position: relative;
}

/* Center images and fit into the page */
.centered {
  margin: 20px auto 20px;
  display: block;
  max-width: 100%;
}

.clear {
  color: inherit;
  text-decoration: inherit;
}

Post template

This is the post template, post pages will be generated based on this.

<!DOCTYPE html>


        <div class="post-information">
          <ul>
            <li>Author: ${author}</li>
            <li>Tags: ${(isamert/create-tag-list tags)}</li>
          </ul>
          <ul>
            <li>Published at <em>${(isamert/org-date-to-iso publish-date)}</em></li>
            <li>Last updated at <em>${(isamert/org-date-to-iso (or update-date publish-date))}</em></li>
          </ul>
        </div>

        <hr />


        <hr />

        <script src=""
                label="> 💬"


Page template

Headers tagged with page tag will be generated based on this template.

<!DOCTYPE html>



Tag template

Pages that list posts belonging to a particular tag will be generated based on this template.

<!DOCTYPE html>

        <h1>Posts tagged with ${tag-name}</h1>
        <a href="${(format "/%s/%s.xml" isamert/blog-rss-per-tag tag-name)}">
          RSS feed for this tag
        </a>
          ${(--map (format "<li><a href=\"/%s\">%s</a></li>\n" (plist-get it :path) (plist-get it :title)) posts)}



Components that I use in my templates.


Generic head portion that I use in every template.

  <title>${title} |</title>

  <!-- Privacy friendly simple analytics -->
  <script src=""></script>

  <script src="/assets/index.js"></script>
  <link rel="stylesheet" href="/assets/main.css">

  <link rel="stylesheet" href="/assets/hljs/solarized-light.css">
  <script src="/assets/hljs/highlight.pack.js"></script>

  <link rel="alternate" type="application/rss+xml" href="" title=" RSS feed">


    <a href="/index.html" class="site-title"></a>
      <a href="/about.html"><i class="fa fa-user"></i> About</a>
      <a href="/feeds.html"><i class="fa fa-rss"></i> Feeds</a>


  <hr />
    Isa Mert Gurbuz © 2022

      <a href="">Source</a>


All posts


Here are some cool things that you can check out:

Türkçe yazılar (Turkish posts)


Every category has its own RSS feed; you can subscribe to them separately, or just subscribe to all posts using the main feed.

Feeds for tags

In no specific order:


About me

I am a software developer living in Istanbul. I write and publish all of my work as free/libre software. Here are a few links for you to get to know me better:

About this web site

I generally write on technical topics like programming languages, algorithms, and mostly on workflow automation, command-line applications, GNU/Linux, Emacs, and org-mode. I try to publish somewhat longer posts with some substance, but lately I've been trying to get into the habit of posting occasional tidbits, snippets, and small tips and tricks, the kind of posts I appreciate when I see them on other blogs. I also want to curate other blogs or links that I find useful somewhere here, someday soon.

I have other planned things for this web site, like the one I just mentioned above. Other than that I’m planning to add some pages containing stuff like:

  • Interactive list of movies/books/articles I have read and rated. Maybe along with some commentary.
  • Computer programs that I use, my use cases, and links to my configurations (I already have a dotfiles repo but I want to move the contents here).
  • Constantly updated everyday life tips, rules of thumb, etc.
  • List of low tech (or high tech too) gadgets that I use and find helpful.

I have been gathering information on these topics, but I need to find a way to present it on my website, and I also need to go over my notes and rephrase them for public consumption.

My projects

I maintain a few projects and occasionally contribute to other free software projects. The following list may be incomplete (please take a look at my GitHub profile if you want to see my dead/half-baked projects), but here are the projects that I've been working on:

Command line applications

A TUI application that lets you use Signal from the command line. Now it is mostly (99%) maintained by @exquo.
A resource opener with a DSL, an alternative to xdg-open. It lets you specify highly complex use cases in a very simple form for opening your resources exactly as you want.
A grep-like tool (or more like a search engine) for Markdown and Org mode documents. Development is slow, but I will return to this project whenever I have a bit more spare time.

Emacs related projects

An Emacs media player, media library manager, radio player and YouTube frontend.
An Emacs/org-mode watchlist manager and OMDb API client.
A package that integrates GitLab with Emacs; it provides simple interfaces for interacting with GitLab, right inside Emacs!
A Swagger UI for Emacs.
A helper for inserting JSDoc comments easily within Emacs.
Inferior mode for cbq
Online Turkish dictionary. Shows the results in a nicely formatted org mode buffer.

Experimental projects
An experimental Scheme interpreter written in Rust. This project came to life in an attempt to learn Rust and a bit about interpreters. There is also an online WebAssembly version published by Niklas Reppel; check it out here.
Another attempt at learning compilers/interpreters. This time I went ahead and designed my own language. It has a mix of ML-style and C-style syntax, and a couple of interesting and novel ideas (it turns out those ideas are already implemented by Scala, Clojure, and D, but I guess it is the first language that brings them together). I still plan to work on this and make it at least usable for real-life scripting.

Other projects
This web site itself. It's simply one file that contains all the website contents, the code that generates the static files based on those contents, and the documentation of the generator. It can be abstracted away in the sense that it could become an Emacs web site generator package, but you can also just copy the file into an org-mode buffer and start using it.
All of the configurations for the programs that I use, plus lots of automation code. There are some modules that I want to turn into separate Emacs packages, like:


Please do :). I'm a little bit slow in terms of responding, but I always do. You can email me regarding any subject that you think I'll be interested in, and I'll get back to you.

If you are interested in my projects and want to contribute/maintain, please don't hesitate to contact me, as I am quite willing to accept contributions or even share/defer the maintenance.

isamertgurbuz at gmail dot com

Median cut algorithm in C++/Qt

I needed a simple color quantization algorithm for my project and didn't want to use another program/library for this simple job, so I implemented median cut with Qt. I just used the explanation of the algorithm on Wikipedia and didn't do any further research, so the code is not well optimized, but it works. I'll try to explain it step by step:

We have an image with an arbitrary number of pixels and want to generate a palette of X colors. The very first thing we need to do is put all the pixels in a list; by pixels, I mean their RGB data. Then we need to find the color channel (red, green, or blue) with the widest range. Let's implement this:

QString filePath = "some_image.png";
int color_count = 256; // The color count that we want to reduce our image to.

QList<QRgb> pixels;
QImage img(filePath);

// To find the color channel with the widest range,
// we need to keep track of each channel's lower and upper bounds.
int lower_red   = qRed(img.pixel(0, 0)),
    lower_green = qGreen(img.pixel(0, 0)),
    lower_blue  = qBlue(img.pixel(0, 0));
int upper_red   = 0,
    upper_green = 0,
    upper_blue  = 0;

// Just loop through all the pixels
for (int x = 0; x < img.width(); ++x) {
    for (int y = 0; y < img.height(); ++y) {
        QRgb rgb = img.pixel(x, y);         // Get rgb data of a particular pixel
        if (!pixels.contains(rgb)) {        // If we have the same pixel, we don't need it twice or more
            pixels.append(rgb);

            lower_red = std::min(lower_red, qRed(rgb));
            lower_green = std::min(lower_green, qGreen(rgb));
            lower_blue = std::min(lower_blue, qBlue(rgb));

            upper_red = std::max(upper_red, qRed(rgb));
            upper_green = std::max(upper_green, qGreen(rgb));
            upper_blue = std::max(upper_blue, qBlue(rgb));
        }
    }
}

We have the upper and lower bounds of the color channels, so we just find the one with the widest range:

int red = upper_red - lower_red;
int green = upper_green - lower_green;
int blue = upper_blue - lower_blue;
int max = std::max(std::max(red, green), blue);

Then we need to sort our pixel list according to the channel we just found. For example, if the blue channel has the greatest range, then a pixel with an RGB value of (32, 8, 16) is less than a pixel with an RGB value of (1, 2, 24), because 16 < 24.

qSort(pixels.begin(), pixels.end(), [max,red,green,blue](const QRgb& rgb1, const QRgb& rgb2){
    if (max == red)  // if red is the channel with the widest range
        return qRed(rgb1) < qRed(rgb2); // just compare their red channels
    else if (max == green) //...
        return qGreen(rgb1) < qGreen(rgb2);
    else /*if (max == blue)*/
        return qBlue(rgb1) < qBlue(rgb2);
});
// We just used qSort here.
// As the comparison function, we passed a lambda
// that compares two rgb colors according to our selected color channel.

After sorting our list, we move the upper half of the list into another list, which leaves us with two lists. We then repeat the same procedure on the resulting lists until we have X lists (so if we want to reduce our color palette to 16 colors, we repeat this step until we have 16 lists).

QList<QList<QRgb>> lists;
int list_size = pixels.size() / color_count;

for (int i = 0; i < color_count; ++i) {
    QList<QRgb> list;
    for (int j = list_size * i; j < (list_size * i) + list_size; ++j) {
        list.append(pixels[j]);
    }
    lists.append(list);
}

We've got our lists. Now we can either take the average of each list to build our X-color palette, or just take the median of each list. I didn't observe much of a difference, so I'm going with the easy one.

QVector<QRgb> palette;
for (QList<QRgb> list: lists) {
    palette.append( / 2));
}

We've built our X-color palette. The next thing to do is convert our original image's colors to our new palette. Actually, there is a Qt function for that, but it has a bug (I'll explain it later), so we need to implement this ourselves.

QVector<QRgb> palette;
for (QList<QRgb> list: lists) {
    palette.append( / 2));
}

QImage out(img.width(), img.height(), QImage::Format_ARGB32);
for (int x = 0; x < img.width(); ++x) {
    for (int y = 0; y < img.height(); ++y) {
        out.setPixel(x, y, palette[closestMatch(img.pixel(x, y), palette)]);
    }
}

In this piece of code, we create a QImage with the same size and format as our original image. Then we loop through all the pixels of the original image, find the closest color from our new palette, and set that color on the corresponding pixel of the new QImage object. And that's it.

There is one function in this code that needs explanation: closestMatch. I took it from the Qt source code. Actually, QImage has a function named convertToFormat. You can use this function to change the format of your image, and it also lets you change the color palette of your image. The function signature goes like this: QImage QImage::convertToFormat(Format format, const QVector<QRgb> &colorTable, Qt::ImageConversionFlags flags = Qt::AutoColor) const and its documentation says:

Returns a copy of the image converted to the given format, using the specified colorTable. Conversion from 32 bit to 8 bit indexed is a slow operation and will use a straightforward nearest color approach, with no dithering.

So we can simply use this function to convert any image using our palette. But there is one problem: if you don't want to change your image format (i.e. your source and output images have the same format), it simply returns the image itself without converting it to our palette. So I extracted the part that finds the closest color to a given color from a vector:

static inline int pixel_distance(QRgb p1, QRgb p2) {
    int r1 = qRed(p1);
    int g1 = qGreen(p1);
    int b1 = qBlue(p1);
    int a1 = qAlpha(p1);

    int r2 = qRed(p2);
    int g2 = qGreen(p2);
    int b2 = qBlue(p2);
    int a2 = qAlpha(p2);

    return abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2) + abs(a1 - a2);
}

static inline int closestMatch(QRgb pixel, const QVector<QRgb> &clut) {
    int idx = 0;
    int current_distance = INT_MAX;
    for (int i = 0; i < clut.size(); ++i) {
        int dist = pixel_distance(pixel,;
        if (dist < current_distance) {
            current_distance = dist;
            idx = i;
        }
    }
    return idx;
}

Kotlin function application

I often write some code like this:
val result = someData.split(...)
    .map { ... }
    .filter { ... }
    .reduce { ... }


As you can see, the last line of the code breaks the beautiful flow of the chained functions. One can rewrite this as:

    .map { ... }
    .filter { ... }
    .reduce { ... }

Which seems better to me, but not as good as this:

    .map { ... }
    .filter { ... }
    .reduce { ... }

I don't know if there is a standard way of doing this, but here is my solution:

infix fun <T, R> T.apply(func: (T) -> R): R = func(this)

So this extension function applies the function it receives as an argument to its receiver object and returns the result of the application. You can use it as an infix operator if you want to:

    .map { ... }
    .filter { ... }
    .reduce { ... }
    .... apply ::someFunction

You can even chain function applications:

    .map { ... }
    .filter { ... }
    .reduce { ... }
    .apply { fun4(it) }

Which is the same as:

    .map { ... }
    .filter { ... }
    .reduce { ... }
    .... apply ::fun1 apply ::fun2 apply ::fun3 apply { fun4(it) }

Also, this code is equivalent to this one:

val result = someData.split(...)
    .map { ... }
    .filter { ... }
    .reduce { ... }


Programming AVR microcontrollers in Linux

The Windows way of doing this is just using ATMEL Studio, but we don't have it on Linux. As a customization freak, I'll just write down the steps for compiling and flashing your program to an AVR microcontroller and leave the rest to you. So integrating these steps into your favorite IDE, if you are using one, is your job.


These are the tools that we need to install; just pull them from your package manager (these package names exist in the Arch Linux repos, they might differ in other distros' repositories):
  • avr-gcc: GNU C compiler for the AVR architecture
  • avr-libc: AVR libraries
  • avr-binutils: some AVR tools; we need it to create hex files from compiled programs, because avrdude needs a hex file instead of a binary to flash
  • avrdude: a dude that is required to perform flashing


  1. Write your program. Let’s say you named it main.c.
  2. Compile it.
    avr-gcc main.c -Os -Wall -mmcu=atmega32 -o main_bin
    • Change -mmcu from atmega32 to your device's name. You can find your device's MCU here.
  3. Convert your program to hex from binary.
    avr-objcopy -j .text -j .data -O ihex main_bin "main.hex"
  4. Flash it.
    avrdude -c usbasp -p m32 -U flash:w:"main.hex"
    • Here you can see the -p option. You need to specify it according to your device. The list is here.
    • Also here you can see the -c option. It specifies the programmer type. In my case it's usbasp, so you should change it to whatever you are using. Here is the list of programmers that avrdude accepts. (If your programmer isn't in the list, which is probably not the case, you can specify your programmer as shown on the same page and save it to an ini file. Then add the -C option pointing to the ini file you just wrote.)

The correct way of using avrdude

When you do the last step, you will get an error saying you don't have permission. You can just run avrdude with sudo and it will work, but of course that is not the preferred way to do it. What you need to do is write a udev rule so we can access the programmer without root privileges.
  1. Create this file: /etc/udev/rules.d/55-avr-programmer.rules
  2. Write this into file:
    # USB-ASPcable
    ATTR{idVendor}=="16c0", ATTR{idProduct}=="05dc", GROUP="plugdev", MODE="0666"
    • Again, as you can see, this configuration is for my programmer, usbasp. You need to change idVendor and idProduct according to your device. To find these values, just run lsusb (if you are using a USB extension cord or something like that, lsusb might not display your device; just connect your programmer directly to your PC in that case):
      > lsusb
      Bus 003 Device 010: ID 16c0:05dc Van Ooijen Technische Informatica shared ID for use with libu
    • In the sixth column, you can see your device's vendor id and product id in the format VENDOR_ID:PRODUCT_ID. Edit your file according to this information.
  3. You may restart your computer or just use these commands to reload udev rules:
    $ sudo udevadm control --reload-rules
    $ sudo udevadm trigger
    • You may need to unplug your programmer and plug it back in. From now on, you can use avrdude without root privileges.

Functional programming in C++

C++ enables you to do nearly everything, with every possible paradigm. I actually consider it a huge mess, or maybe I'm just the one who cannot comprehend that much stuff. Considering C++ was made by people smarter than me, probably the latter is true.

So trying to use C++ as a purely functional programming language is probably possible, but pointless in all cases except having some fun. A more sensible strategy may be using it as a functional but not-so-pure language, like Scala (or something along those lines). But then the question arises: why not use a language designed for that from scratch? Many answers can be given to this question, but the most obvious ones go like this:

  • You hate C++ but you need to write some C++.
  • You love C++ and are looking for better paradigms to use in your programming.
  • You are neutral towards C++ and too lazy to learn another language from scratch, so you decided to go with C++. But you are not too lazy to learn a new paradigm.
  • Other combinations involving a love-hate relationship with C++.

There are a lot of tutorials on this subject, but they sometimes go too extreme or are too specific. I'll try to give you a general idea of how functional programming can be done using C++. These things generally depend on newer C++ features, so I'll put an indicator on each feature showing which version of C++ it requires. It's probably possible to implement some of these features for earlier versions, but I'll just stick with the easiest and most recent implementations; if some feature takes too much effort to implement, I'm not even going to mention it. Also, I'm not advocating the use of persistent (immutable) data structures, because they are either cumbersome to use or inefficient. At the end of the day we are using C++, so let's keep it multi-paradigm. Think of this tutorial as "zero-cost paradigm changes that you can apply to your daily C++ programming".

First things

Use auto at all costs (C++11)

auto is just great. It makes your code faster and shorter. Consider this example (I took this example from Effective Modern C++ by Scott Meyers):
std::unordered_map<std::string, int> m;
// ...
for (const std::pair<std::string, int>& p : m) {
   // ...
}

The problem with this code is that std::pair<std::string, int> is not the type of an element in a std::unordered_map<std::string, int>. It's actually std::pair<const std::string, int>. So in each iteration, this type mismatch creates a conversion overhead (a temporary pair is created and bound to the reference). The solution is easy and elegant. Just use auto:

std::unordered_map<std::string, int> m;
// ...
for (const auto& p : m) {
   // ...
}

Not only do we get rid of the overhead, we also have shorter code. And considering we will use a lot of types involving templates and such, auto will save us from a lot of typing.

Try not to deal with manual memory management (C++11)

Another core idea of functional programming is that you tell the computer what to do, not how to do it. So do not manage memory manually; try to leave that job to the compiler.
  • Just use stack-allocated objects instead of heap-allocated objects as much as you can (see this Q&A for more information/explanation).
  • If you really need a pointer, use smart pointers.
  • Use move semantics. Here is a great slide deck about what you need to do, in a nutshell.


Higher order functions

This is the fundamental idea of functional programming: passing functions as arguments to other functions and returning functions from functions. Before C++11 you could achieve such things by using function pointers or maybe the call operator (function objects). But now we have std::function and lambdas. Consider this code that shouts a given string:
#include <iostream>
#include <string>

int main() {
    std::string str = "oh, hi mark";

    // Turn all chars to upper
    for (auto & c: str)
    c = toupper(c);

    // Add some exclamation marks
    str = str + "!!!";

    std::cout << str << std::endl;
}

Let's make this shouting a function so we can reuse it.

#include <iostream>
#include <string>

std::string shout(std::string str) {
    for (auto & c: str)
    c = toupper(c);

    str = str + "!!!";
    return str;
}

int main() {
    std::string str = "oh, hi mark";
    std::cout << shout(str) << std::endl;
    // Now we can shout as much as we want.
    std::cout << shout("you are tearing me apart Lisa") << std::endl;
}

Now suppose we are going to use that shout function only in our main function, so it's cumbersome to add it to a header and so on. This is where lambdas come into play:

#include <iostream>
#include <string>

int main() {
    auto shout = [](std::string str){
        for (auto & c: str)
            c = toupper(c);
        return str + "!!!!";
    };

    std::cout << shout("oh, hi mark") << std::endl;
    std::cout << shout("you are tearing me apart Lisa") << std::endl;
}

Problem solved. Lambdas are much more complex than this; they have a lot of features. If you don't know about lambdas, check this link out, and also check this link to see what C++14 and 17 bring to lambdas. Generic lambdas in particular, a C++14 feature, will help you a lot:

auto genericAdd = [](auto x, auto y){ return x+y; };
std::cout << "4+12=" << genericAdd(4, 12) << std::endl;
std::cout << "4.0+12=" << genericAdd(4.0, 12) << std::endl;
std::cout << "\"Hello \"+\"world!\"=" <<
         genericAdd(std::string("Hello "), std::string("world!")) << std::endl;

One other benefit of using lambdas is that you can pass them as arguments to <algorithm> functions. The STL has some great functions which I’ll talk about later in this tutorial.

#include <algorithm>
#include <functional>
#include <iostream>
#include <iterator>
#include <vector>


std::vector<int> vec = {4, 8, 15, 16, 23, 42};

// Print the minimum element (min is an iterator, so dereference it)
auto min = std::min_element(vec.begin(), vec.end());
std::cout << *min << std::endl;

// Print elements greater than 20
auto printIfGreaterThan20 = [](int elem){
    if (elem > 20)
        std::cout << elem << std::endl;
};
std::for_each(vec.begin(), vec.end(), printIfGreaterThan20);

// Find elements greater than 20 and copy them into vec2
std::vector<int> vec2;
std::copy_if(vec.begin(), vec.end(), std::back_inserter(vec2), [](int x){ return x > 20; });

// Doing the same thing again but instead of our comparator function, just use another STL function
std::vector<int> vec3;
std::copy_if(vec.begin(), vec.end(), std::back_inserter(vec3),
          std::bind(std::greater<int>(), std::placeholders::_1, 20));

I’ll talk about std::bind and placeholders in a bit. But here is a complete list of <algorithm> functions.

Partial Application and Currying

There is a function object called std::less which compares two comparable values x and y and returns true if x < y, false otherwise. You can use it as your comparator for sorting algorithms, for example:
std::vector<int> vec = {42, 4, 15, 8, 23, 16};
std::sort (vec.begin(), vec.end(), std::less<int>());
for(auto i: vec)
    std::cout << i << ", ";
// Prints 4, 8, 15, 16, 23, 42

What if you want to use std::less as the comparison function for std::remove_if? Let's say we want to remove numbers lower than 22 from our list. Of course, we can write a lambda like this and use it as our predicate:

[](int x) {return x < 22;}

But instead of writing our own function, we want to use std::less. If we look at the signature of std::remove_if, it requires a UnaryPredicate, but obviously std::less is a BinaryPredicate. What we need to do is partially apply 22 to std::less:

using namespace std::placeholders;
auto lowerThan22 = std::bind(std::less<int>(), _1, 22); // Partial application using std::bind
std::vector<int> vec = {4, 8, 15, 16, 23, 42};
vec.erase(std::remove_if(vec.begin(), vec.end(), lowerThan22), vec.end());

As you can see, using std::bind we bind the second argument of std::less to 22. As the first argument we pass the placeholder _1, which is actually just std::placeholders::_1. After partial application, the std::less(x, y) function turned into something like std::less(x, 22). So we partially applied an argument to a binary function and it became a unary function: now it only needs one argument to work.

However, there is no out-of-the-box support for currying, and implementing it is not that easy, so I’ll just leave a great SO answer here. You can learn what currying is and how you can implement it in C++11/14/17.


Folds

Folding is reducing a data structure to a single value with a given operator. For more information, take a look here. I’m going to inspect folding in two categories:

1. Folding STL containers

std::accumulate is the way to go. There are two overloads of std::accumulate:
  • std::accumulate(first, last, initial_value)
  • std::accumulate(first, last, initial_value, binary_operator)

The first one uses the + operator as the default binary_operator. Look at these examples:

std::vector<int> v = {1,2,3,4,5};

// Get sum of the vector:
int sum1 = std::accumulate(v.begin(), v.end(), 0); // 0 as initial value
// sum1 is 15

// Multiply every element by 2 while summing them
int sum2 = std::accumulate(v.begin(), v.end(), 10, [](int x, int y) { return x + (2*y); });
// sum2 is 40 (note the initial value)

// Again, you can use STL functions as the BinaryOperator
int result = std::accumulate(v.begin(), v.end(), 50, std::minus<int>());
// result is 35 (note the initial value)

// Folding boolean values
std::vector<bool> bs = {true, true, false, true};
bool allTrue = std::accumulate(bs.begin(), bs.end(), true, std::logical_and<bool>());
bool anyTrue = std::accumulate(bs.begin(), bs.end(), false, std::logical_or<bool>());
// Note that these last two don't do short-circuiting

// These do short-circuit
bool allTrue2 = std::all_of(bs.begin(), bs.end(), [](bool x) { return x; });
bool anyTrue2 = std::any_of(bs.begin(), bs.end(), [](bool x) { return x; });

2. Folding arbitrary number of arguments

C++11 has a thing called variadic templates, which enables you to write functions that take an arbitrary number of template parameters.
// The `auto` usage here is a C++14 feature.
// You can define a template and make this base case for only one element
// and get the return type from template for making this function C++11 compatible.
auto sum() {
    return 0;
}

// Again, use `First` as the return type instead of `auto` to make this C++11 compatible.
template<typename First, typename... Rest>
auto sum(First first, Rest... rest){
    return first + sum(rest...);
}

// Usage:
// sum(1, 2, 3, 4) returns 10

So you can create functions that take an arbitrary number of arguments and fold them. All you need to do is write your function recursively and define a base case (or whatever recursion rules you need). But even better, C++17 has variadic folds, which make this process easier by handling the base case itself.

template<typename ...Args>
auto sum(Args ...args) {
    return (args + ... + 0);
}

Here is a great tutorial about variadic templates of C++11. Here you can learn more about parameter packs.

Sum types (std::variant) (C++17)

Sum types are very cool and useful. Basically, a sum type holds exactly one type out of a set of possible types. To be more concrete, I’ll give an example: let’s say you have SoundFile, ImageFile and VideoFile. A file object can be a SoundFile or an ImageFile or a VideoFile. Defining your file object as a sum type of these types gives you a lot of flexibility and type safety. See this example:
#include <string>
#include <variant>

struct File { std::string path; };
struct SoundFile : File { };
struct ImageFile : File { };
struct VideoFile : File { };

int main() {
    std::variant<SoundFile, ImageFile, VideoFile> file;
    // file object can be one of these three

    file = ImageFile(); // Now file is ImageFile

    // To get the content of the variant
    ImageFile f2 = std::get<ImageFile>(file);
    SoundFile f3 = std::get<SoundFile>(file); // This line throws std::bad_variant_access, because file contains an ImageFile, not a SoundFile
}

In practice, we don’t blindly try to get the content of the variant. A better way to get the content is to use a visitor and pattern match against all possible types. First we define a visitor, then do the pattern matching using std::visit:

struct FileVisitor {
    void operator()(const SoundFile& sf) const { std::cout << "A sound file!" << std::endl; }
    void operator()(const ImageFile& imf) const { std::cout << "An image file!" << std::endl; }
    void operator()(const VideoFile& vf) const { std::cout << "A video file!" << std::endl; }

    // (Note: an auto parameter on a regular member function requires C++20;
    // before that, write this overload as a template.)
    void operator()(const auto& f) const { std::cout << "Something else?!?!" << std::endl; }
    // We know for sure that our file object is one of the three types we defined above.
    // But we may end up adding another type to our variant, something like TextFile, and we
    // may forget to update our visitor. In that case, this last overload will match and save us.

    // There is also another use case for this auto overload. For example, you may want to play
    // the sound of the file if it's a SoundFile but otherwise just display the file's
    // path. In that case you only pattern match for SoundFile and the rest is handled
    // by the auto overload.
};

// Now you can use std::visit
std::visit(FileVisitor(), file);

The problem with this approach is that it cannot capture state. A better way is using lambdas:

template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

std::visit(overloaded {
    [](const SoundFile& sf) { std::cout << "Playing the sound..." << std::endl; },
    [](const auto& other) { std::cout << other.path << std::endl; },
}, file);

Still a bit verbose, but at least it's in-place and more useful thanks to lambdas.


Functors

Here I’m not talking about function objects; I’m talking about functors as described here. There are several libraries that provide some kind of Functor/Monad types, but again I’ll just talk about the built-in functors that you can start using immediately.

In case you don’t know about functors: a functor is a mapping that preserves the structure between two categories. More concretely, functors give you the ability to apply a transformation to some structure without exposing its contents to the public. What I mean by “exposing its contents to the public” is iterating over the structure if it’s a container, or dereferencing it if it’s a pointer, etc.

For example, every time you need to apply some function to a vector, you loop through it, apply the function to every individual element, then put those elements back into a vector. Another example would be a pointer. Let's say you have a pointer to an int and a function that takes an int as input. To apply this function to your pointer, first you need to dereference it and then apply the function. Afterwards you need to wrap the result in a pointer again.

STL Containers as Functors

Functors need some kind of helper function that applies the transformation function to the structure. For STL containers, this helper function is std::transform.
std::vector<int> xs = {1, 2, 3, 4};

std::vector<int> squared_xs;
std::transform(xs.begin(), xs.end(), std::back_inserter(squared_xs), [](int x){ return x*x; });
// squared_xs is now {1, 4, 9, 16}

We applied the lambda function to xs without exposing the inner data structure.

std::optional as Functor (C++17)

std::optional is a type for representing situations where there may or may not be a value. For example, std::optional<int> x means that x may contain an integer or it may contain nothing at all. Of course, one could use pointers for such situations, but then you have to deal with memory allocation and the other bad stuff that comes with pointers, all for this trivial problem. Check these links out to learn more use cases of std::optional: link1, link2.

std::optional does not come with a helper transformation function. There is a very nice proposal that I came across, but I don’t know its current status. So let's just write our own transformation function for std::optional; it's fairly trivial to implement. To understand it, look at this pseudocode first:

// We have an optional that wraps type T.
// We also have a function that takes a T and returns R.
// So what we want to do is somehow apply this function to optional<T>.
// To do that, we just extract the value from optinal and supply that
// value to the function. Then we wrap the result to optional.

optional<R> transform(optional<T> opt, (T -> R) func) {
    if (opt.has_value())
        return optional(func(opt.value()))
    return optional_empty;
}

The C++ version with some simplifications:

#include <optional>

template <typename T, typename F>
auto transform(const std::optional<T>& opt, F&& f) -> std::optional<decltype(f(*opt))> {
    using ResultType = std::optional<decltype(f(*opt))>;
    return opt ? ResultType(f(*opt)) : std::nullopt;
}

Now we can take any function that has a type of T -> R and apply this function to our optional type using our transform function. Consider this:

std::optional<int> x = 3;
auto plus_3 = [](int x){ return x + 3; };

auto y = transform(x, plus_3); // y is an optional<int> and has value of 6
auto z = transform(transform(y, plus_3), plus_3); // z is an optional<int> and has value of 12

So this is great: we can use functions with std::optional, even though they know nothing about std::optional, with the help of the transform function.

Pointers as Functors

Let’s say that, given a std::unique_ptr<int>, you want to get a std::unique_ptr<std::string> which represents the text version of that int. Assume that your conversion function has this signature: std::string convert(int number). So again, you need to unpack the integer from the unique_ptr, apply the function, and wrap the result back into a unique_ptr. But as you know, we can use functors to solve this unpacking problem. See this code:
#include <memory>

template<class T, class F>
auto transform(const std::unique_ptr<T>& opt, F&& f) -> std::unique_ptr<decltype(f(*opt))> {
    using ResultType = decltype(f(*opt));
    // unique_ptr has no value constructor, so allocate the result explicitly
    return opt ? std::make_unique<ResultType>(f(*opt)) : nullptr;
}

This is the transformation function for pointers. Notice the similarity with the optional transformation function; dereferencing a pointer and getting the value of an optional happen to share the same * syntax. Now we can do something like this:

auto number = std::make_unique<int>(42);
std::unique_ptr<std::string> result = transform(number, convert);

Taking functors a bit further

As you may have noticed, functors do this: you have a variable of type B<A> and a function of type C function(A) (a function that takes an A as argument and returns a C), and you want to get a B<C>. What functors do is handle all the unwrapping and wrapping for you.

But what if you have a variable of type B<A> and a function of type B<C> function(A), and you want to get a B<C>? A more concrete example: you have a std::optional<std::string> and a function that converts the given string to the corresponding integer. Assume the function returns a std::optional<int> instead of a plain int, because the conversion may fail and we want to handle that properly. Again, what you need to do is get the string value out of the optional variable; then you have a plain std::string and can apply the conversion function to it. As with functors, we can generalize this pattern into a function which handles the unpacking for us. This function is called monadic bind in functional programming. Implementing it could be an easy exercise.

Nice little curl commands

Here are some curl friendly web services that you can use in your terminal:


  • curl Displays a nice weather report.
    • You can also specify city-code like this:
    • If the output is too long for your terminal, just use it with less: curl | less -R


  • curl Simply shows your public ip.
  • curl Prints a formatted JSON that contains information about your ip.


  • curl -F'file=@yourfile.png' Uploads specified file to and returns the url.
  • curl -F'shorten=' Shortens the given URL.
    • Just visit for more information.
  • curl --upload-file ./hello.txt Uploads specified file to and returns the url.
    • This service is more sophisticated, you can set some constraints to your files and stuff. Visit for more examples with curl.

Cheat sheets

  • curl Shows a simple cheatsheet for specified command (in this case tar)
  • curl Same as above, but this uses tldr. There are some problems, though:
    • common/tar.md

    The first part may be one of common or linux; the second part is the command itself. If the command is linux-specific, it's under the linux folder, obviously, and most other things go to common. You can create a small script that takes a command as input, checks the folders one by one and returns the first existing page it finds. This is left as an exercise for the reader. (Or you may just simply install a client; visit tldr.)


curl -s -A "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:56.0) Gecko/20100101 Firefox/56.0" "" --data-urlencode "q=WORD_OR_SENTENCE" | grep -Po '<div dir="ltr" class="t0">\K[^<]*'
  • Change FROM to source language code, for example en for English.
  • Change TO to destination language code, for example tr for Turkish.
  • Change WORD_OR_SENTENCE to anything you want. You can use spaces.
  • Wrap this to a bash script and enjoy easy translations.

This example demonstrates how you can extract the relevant information from an ordinary website. Always use the mobile version if available, because it is easier to parse.

Cryptocurrency rates

  • curl Shows the cryptocurrency rates.
    • Run curl for more information about usage.


  • curl Turns given string/url into an ASCII art QR code.


If you are using a service that supports WebDAV, you can use these simple curl commands to download/upload files to your service. You can also do more sophisticated things with curl but if you need more than just downloading/uploading files then it’s better to use a client dedicated for that service.
  • Downloading:
    • curl -u LOGIN:PASSWORD --output FILE
    • Downloads the server_dav://REMOTE_FILE to FILE
  • Uploading:
    • curl -u LOGIN:PASSWORD -T FILE
    • Uploads FILE to server_dav://REMOTE_FILE

It’s better not to write your password into these commands. If you remove the password part, it will simply show you a password prompt when you execute the command, which is better than exposing your password to your bash history.

Convert Documents

I’ll just leave a link here: you can convert nearly any format to any other using this service. It has a nice and clear API, and the website provides curl command examples.

Automatize your logins with gnome-keyring (and optionally with KeePassXC)

Storing passwords in plain text is not an encouraged act, but typing your password every time you start an application is also cumbersome. To solve this dilemma, the easiest solution I came up with is using gnome-keyring to store my passwords. I’m not a GNOME user, but gnome-keyring does not have many dependencies and a lot of applications already require it, so I believe gnome-keyring is a good choice. What I want to achieve is something like this:
  • Store my passwords in gnome-keyring so that they are encrypted.
  • When I log in to my computer, gnome-keyring automatically gets unlocked, so that programs can get the passwords they need without bothering me.

But there is a problem with this particular solution, at least for me. I’m using KeePassXC to manage my passwords, so copying all those passwords (or just the required ones, which is still a lot) to gnome-keyring is not feasible. So I need to do something about that too.

Installing/configuring gnome-keyring

Skip this step if you already have a running gnome-keyring.
  • Just install these packages: gnome-keyring, libsecret and seahorse.
  • You need to create a keyring named login so that it gets unlocked when you log in. To create it, open seahorse and follow File -> New -> Password Keyring. Name it login and, as its password, enter your login password. This method generally works with login managers; if you are not using one, you need to figure it out yourself. But getting gnome-keyring unlocked at login is not a big deal: if it's locked, the first time a program requests a password, gnome-keyring will show a prompt and ask for your password to unlock that keyring. Subsequent password requests will go through silently because you have unlocked the keyring.

Adding passwords to gnome-keyring

We need to create a Stored Password in the login keyring we’ve just created. The problem is that it is not possible to create Stored Passwords with attributes in seahorse, and we need to attach attributes to passwords because the command-line tool secret-tool requires them when querying for a password. So simply create your Stored Password using secret-tool:
secret-tool store --label=Mail name mail_password

Then it will ask for the password. name and mail_password are a key-value pair (an attribute). You can add more attributes like this, or change them as you wish. Now you can see the added password in seahorse. (You may wonder why we did not specify a keyring name while adding the password: this command adds your password to your default keyring, which is the login keyring. If it’s not the default one, right-click on it in seahorse and set it as default.)

If you are using KeePassXC like me, my advice would be: instead of duplicating your passwords in gnome-keyring, only add your KeePass password to gnome-keyring: secret-tool store --label=KeePass name keepass_password. I’ll get to the usage later.

Querying for a password

So you have your passwords in gnome-keyring and you want to supply those passwords to some program. Of course, every program has a different method for storing/getting your password. I’m going to use mutt as an example (it’s a command-line mail client). But first, let's see how we get our password:
secret-tool lookup name mail_password

This command will print your password. To configure mutt to use gnome-keyring, simply add this line to your muttrc:

set imap_pass=`secret-tool lookup name mail_password`


To get a password from KeePassXC, use this command:
secret-tool lookup name keepass | keepassxc-cli show /path/to/keepass/db/file "/path/to/password/entry"

But this prints a lot of information. To get just the value of the Password entry, use something like this:

secret-tool lookup name keepass | keepassxc-cli show /path/to/keepass/db/file "/path/to/password/entry" | grep "Password: " | head -n 1 | cut -c 11-

To see your database structure, use this command:

secret-tool lookup name keepass | keepassxc-cli ls /path/to/keepass/db/file

This will only list top-level entries and directories. You can add, for example, "email" to this command and it will print the entries under the /email folder.

For your muttrc, you need to add this:

set imap_pass=`secret-tool lookup name keepass | keepassxc-cli show /path/to/keepass/db/file "/path/to/password/entry" | grep "Password: " | head -n 1 | cut -c 11-`

Security concerns

You may say that this kind of approach exposes all of your passwords to all user-level programs. Actually, that is the kind of behavior I’m trying to achieve here, so that I don’t need to type my passwords for each program. If you have a malicious program in your system, it will eventually get your passwords anyway. But gnome-keyring gives you a lot of flexibility: you can lock your keyring after your programs have logged in, or you can keep your keyring locked all the time (in that case, every time a program tries to use your password, gnome-keyring will ask for your user password, so you use one password for every login, which is still better than typing different passwords into different programs every time), etc. This is a much better solution than keeping your passwords as plain text in your configuration files or typing them manually every time.

Also you can probably do the same things with kwallet if you are using KDE. Just search for equivalent commands for kwallet.

Emacs - Run flycheck on all buffers after save

To just see the working solution, scroll down to The Result.

Flycheck only runs on the current buffer. If you make a change in a file that affects another file, the buffer of the second file will not be notified, and thus flycheck will not run on that buffer. So what we need to do is add an after-save hook which runs flycheck on the other buffers, but only on file buffers; we don’t want to run flycheck on temporary buffers and the like. It seems simple, but it took some time for me to get there, because I know too little about elisp.

First, we need a function that runs flycheck on a given buffer. There is a function called flycheck-buffer, but it only checks the current buffer. It turns out this is how elisp functions generally work, and there is a way to get around it: using with-current-buffer we can run any function on a given buffer. with-current-buffer switches to the given buffer, runs the body, and then restores the old current buffer. So:

(defun flycheck-buffer* (buffer)
  "Runs flycheck on given BUFFER."
  (with-current-buffer buffer
    (flycheck-buffer)))

Another thing we need is a function that returns all buffers: it's buffer-list. We need to remove temporary buffers and the current buffer from that list. Here it goes:

(defun other-file-buffer-list nil
  "Returns the list of all file buffers except currently open one and temporary buffers and stuff."
  (delq (current-buffer)
    (cl-remove-if-not #'buffer-file-name (buffer-list))))

And the last function we need is this:

(defun flycheck-all-file-buffers nil
    "Simply run flycheck on all file buffers."
    (mapc 'flycheck-buffer* (other-file-buffer-list)))

Lastly, we need to add this function to after-save-hook. But I want to be able to disable/enable this feature whenever I want, because if you have a lot of buffers open, this feature may cause some lag on save.

(defun enable-flycheck-all-file-buffers-on-save nil
  (add-hook 'after-save-hook 'flycheck-all-file-buffers))

(defun disable-flycheck-all-file-buffers-on-save nil
  (remove-hook 'after-save-hook 'flycheck-all-file-buffers))

The Result

Run M-x then call enable-flycheck-all-file-buffers-on-save. From now on, when you save a file, other files will be flychecked too. To disable it, call disable-flycheck-all-file-buffers-on-save.
(defun flycheck-buffer* (buffer)
  "Runs flycheck on given BUFFER."
  (with-current-buffer buffer
    (flycheck-buffer)))

(defun other-file-buffer-list nil
  "Returns the list of all file buffers except currently open one and temporary buffers and stuff."
  (delq (current-buffer)
    (cl-remove-if-not #'buffer-file-name (buffer-list))))

(defun flycheck-all-file-buffers nil
    "Simply run flycheck on all file buffers."
    (mapc 'flycheck-buffer* (other-file-buffer-list)))

(defun enable-flycheck-all-file-buffers-on-save nil
  (add-hook 'after-save-hook 'flycheck-all-file-buffers))

(defun disable-flycheck-all-file-buffers-on-save nil
  (remove-hook 'after-save-hook 'flycheck-all-file-buffers))

Bash scripting guide

I’ve been writing some bash scripts lately and I’ve learned a lot. I must say that it’s really fun to write bash scripts: every line of code feels hacky, and no matter what I wrote, it felt bad, which is kind of liberating. I found my real self in bash scripts. Here are some of the things that I find useful and/or important.

I’ll be talking about bash specifically, but a lot of the features here are implemented in very similar ways in other shells.


The most portable shebang for bash scripting is #!/usr/bin/env bash. It asks env to find bash, wherever it may be, and run the script with it. Do not use sh; it may be linked to bash, but most of the time this is not the case.

Shebangs also let you do some cool tricks:

Running scripts with sudo

If you need to run some commands with root privileges in your script, it is generally advised to run your whole script with sudo instead of having sudo command ... lines in the script. To write such a script, you would need to check whether you have root privileges. Instead of that, you can use this kind of shebang:
#!/bin/sudo /bin/bash

Now your script is guaranteed to be running under sudo, kind of. As I said, using #!/usr/bin/env to find the binary you want is the most reliable way of doing it. With this shebang we have a problem: sudo and/or bash might not be in the /bin directory. You might be tempted to do this then:

#!/usr/bin/env sudo bash

Which seems reasonable. We ask env to find sudo and call it with the bash argument, and due to the nature of shebangs, the script’s path is appended at the end. So the final call produced by the shebang should be this:

/path/to/sudo bash /path/to/your/script

But unfortunately, this is not the case. Because env parses all its arguments as a whole, it looks for an executable named sudo bash in your $PATH. That is easy to fix though: just use the -S option of env to be able to pass arguments in shebang lines:

#!/usr/bin/env -S sudo bash

I’m not entirely sure about this style of sudo calls; there may be implications that I’m missing, but it has worked out well for me.

Running other programs with shebangs

This is not entirely related to bash scripting but it’s worth mentioning. Check this out:
#!/usr/bin/env -S cat ${HOME}/.bashrc

This script directly calls cat with the ${HOME}/.bashrc argument. Instead of using bash to call the cat program, we got rid of one level of indirection. (Using ${HOME} instead of $HOME is just an env restriction.) This may seem silly, but I’m sure it has its own use cases.


Here are some basic tips that make your code faster and easier to reason about.

true and false

  • true and false are actual binaries that do nothing and return 0 and 1 respectively as their exit codes. If you pass a command to an if clause, it checks the command's exit code and selects the branch accordingly: exit code 0, which means a successful exit, is treated as true, and everything else as false.
if true; then echo "hey, it's true!"; fi

# They are also helpful in context of functions:
function starts_with {
    case "$1" in
        "$2"*) true ;;
        *) false ;;
    esac
}

# prints yes
if starts_with "something" "some"; then echo "yes!"; else echo "no :("; fi
  • But I should mention that true and false do not stop the function's flow. In bash, the last command's exit code is returned as the function's exit code. To stop the function and return true, just use return: it halts the function and returns 0 as the exit code. We can revise the function from above in that style:
function starts_with {
    case "$1" in
        "$2"*) return ;;
        *) return 1 ;;
    esac
}

  • To exit early with a false value, just use return something-not-zero, like return 255.

[[ ]] and (( )) instead of [ ]

  • [ behaves like an ordinary command (it even exists as a binary on disk, although bash also provides it as a built-in). [[ is bash syntax and has a lot of improvements over [: it doesn't word-split or glob-expand its operands, and it supports pattern and regex matching.
  • (( is like [[ but for arithmetic expressions only. You can compare variables and do some calculations with them directly.
echo "Enter a year:"
read year

if [[ -z $year ]]; then
    echo "Year cannot be empty."
elif (( ($year % 400) == 0 )) || (( ($year % 4 == 0) && ($year % 100 != 0) )); then
    echo "A leap year!"
else
    echo "Not a leap year :("
fi

let instead of (( ))

Another somewhat nicer alternative to (( )) is let. It’s not an alternative inside if clauses, but for assignments it requires less typing:
let l=33+9


declare and its friends

declare is a pretty useful built-in. I’ll go over some of its capabilities and my take on usage, but you can type help declare to see a very informative and short text about it.
  • Using declare inside a function makes the variable local, meaning it does not interfere with global variables. A clearer alternative is the local built-in. If your intention is the exact opposite, i.e. you want to declare a global variable, use the -g option of declare. (Actually, just assigning to a variable without the declare/local keywords makes it global, so you don’t need something like declare -g a=3 inside a function to make it global; a=3 is enough. -g comes in handy when you are using other options of declare and want the variable to be global.)

function greet {
    local greeting="hi"

    echo "Your name:"
    read name

    echo "Local greeting:"
    echo "$greeting $name"
}

echo "Global greeting:"
echo "$greeting $name"
  • As you may have noticed, name becomes a global variable. If you want to keep it in the scope of the function, add local name before the read name line.
  • Also, you can use the options that declare takes with local. (Yeah, it's possible to do some stupid thing like local -g.)
  • To declare a read-only variable, you can use declare -r or, better, readonly.
  • To export variables into the environment, you can use declare -x or, better, export.
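A quick sketch of how readonly and export behave in practice (the variable names here are made up for the demo):

```shell
#!/usr/bin/env bash

readonly GREETING="hi"   # any later assignment to GREETING will fail
export NAME="mark"       # NAME is now visible to child processes

# A child process sees exported variables but not plain ones:
unexported="secret"
bash -c 'echo "child sees: ${NAME:-nothing} / ${unexported:-nothing}"'
# prints: child sees: mark / nothing

# Assigning to a readonly variable fails; do it in a subshell so the
# script itself keeps going:
(GREETING="hello") 2>/dev/null || echo "GREETING is readonly"
```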

String manipulation

Here is a quick summary of string manipulation capabilities of bash: (Assume string is a pre-defined variable)
  • ${#string} → returns the length of $string.
  • ${string:4} → returns the substring of $string starting at index 4 (indexing is zero-based).
  • ${string:4:3} → returns the substring of length three starting at index 4 of $string.
  • ${string#asd} → Removes asd from the beginning of $string (if it starts with asd).
  • ${string##asd} → Same as above. The difference between # and ## becomes apparent when you start using globbing operators: # removes the shortest match while ## removes the longest. Check this:
string="abcabcdefg"
x=${string#a*c}  # x is abcdefg
y=${string##a*c} # y is defg
  • ${string%asd} → Removes asd from the end of $string.
  • ${string%%asd} → Same as above but, as in the case of # and ##, % removes the shortest match and %% removes the longest match.
  • ${string/asd/123} → Replaces the first match of asd with 123.
  • ${string//asd/123} → Replaces all matches of asd with 123. Again, you can use globbing characters here.
  • ${string/#asd/123} → Replaces asd with 123 if it’s at the front of $string.
  • ${string/%asd/123} → Replaces asd with 123 if it’s at the end of $string.
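All of the above can be verified quickly in a shell; here is a small demonstration (the sample string is made up):

```shell
string="abcabcdefg"

echo "${#string}"         # 10 — length
echo "${string:4}"        # bcdefg — from index 4 to the end
echo "${string:4:3}"      # bcd — three characters starting at index 4
echo "${string#a*c}"      # abcdefg — shortest a*c match removed from the front
echo "${string##a*c}"     # defg — longest a*c match removed from the front
echo "${string%d*g}"      # abcabc — shortest d*g match removed from the back
echo "${string/abc/123}"  # 123abcdefg — first match replaced
echo "${string//abc/123}" # 123123defg — all matches replaced
```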

There is also stuff for case manipulation. Given the variable EXAMPLE="An ExaMplE", observe these:

  • ${EXAMPLE^} → An ExaMplE
  • ${EXAMPLE,} → an ExaMplE
  • ${EXAMPLE,,} → an example
  • ${EXAMPLE~} → An ExaMplE
  • ${EXAMPLE~~} → AN eXAmPLe

Here is a more complete reference with more examples.

Regular expression matching

You can use the =~ operator to perform a regular expression match instead of simple globbing:
# Check if input is hexadecimal:
if [[ $input =~ ^[[:xdigit:]]*$ ]]; then
    # do stuff with it
fi
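Wrapped into a small function (the name is_hex is mine), the check can be used as a predicate:

```shell
# Hypothetical helper built on the check above: succeeds when the
# argument consists of hexadecimal digits only.
is_hex() {
    [[ $1 =~ ^[[:xdigit:]]*$ ]]
}

is_hex "deadBEEF42" && echo "looks hexadecimal"
is_hex "not-hex" || echo "not hexadecimal"
```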

Default values

You can use the ${VAR:-DEFAULT} or ${VAR-DEFAULT} syntax to provide default values. The first one expands to DEFAULT if $VAR is empty or unset. The latter expands to DEFAULT only when $VAR is unset. A practical example of this would be:
echo "Your config directory is: ${XDG_CONFIG_HOME:-$HOME/.config}"

There is also a version of this which uses = instead of - (i.e. ${VAR:=DEFAULT}). The difference is that it also assigns the default value to the variable, so that you can use the variable afterwards without specifying a default value every time.
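The three forms side by side (the variable name is made up for demonstration):

```shell
# name is a throwaway demo variable.
unset name
echo "${name-anonymous}"   # anonymous — variable is unset
name=""
echo "${name-anonymous}"   # (empty) — set-but-empty doesn't trigger plain -
echo "${name:-anonymous}"  # anonymous — :- also covers the empty case
echo "${name:=anonymous}"  # anonymous — := additionally assigns the default
echo "$name"               # anonymous — the assignment stuck
```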



You can access arguments using the positional parameters: $1, $2 ... $9, ${10}, ${11} .... shift, as the name suggests, shifts those parameters: when you call shift, $2 becomes $1, $3 becomes $2, and so on. It comes in handy in loops, or when you want to process the first N parameters and leave the rest as-is while passing them to another program.
# Removes given files if they are empty

while (( "$#" )); do
    if [[ -s $1 ]]; then
        echo "Can't remove."
    else
        rm "$1"
    fi
    shift
done


shift can also be called with a number argument, like shift 3, which shifts the parameters 3 times.
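A sketch of shift inside a function (the function name is made up for illustration):

```shell
# Print the first two arguments, then shift them away and print whatever is left.
first_two_then_rest() {
    echo "first: $1, second: $2"
    shift 2
    echo "rest: $*"
}

first_two_then_rest a b c d e
# first: a, second: b
# rest: c d e
```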


Say that we have a wrapper script/function that checks if ripgrep (rg) is installed and executes it with the given parameters; otherwise it calls grep with the given parameters:
rg_path=$(which rg)
if [ -x "$rg_path" ]; then
    rg "$@"
else
    grep "$@"
fi
  • "$@" is the equivalent of doing "$1" "$2" "$3" .... And it’s the only thing that does that.
  • "$*" concatenates the parameters using IFS as the separator. (If IFS is empty, which is the case in this script, it simply uses a space as the separator.)
  • To learn more about special parameters, check this.
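The difference is easy to see by counting how many words survive each expansion (count_args is a made-up helper):

```shell
# Report how many arguments the function received.
count_args() { echo $#; }

set -- "one two" three   # two positional parameters; the first contains a space

count_args "$@"   # 2 — word boundaries preserved
count_args "$*"   # 1 — everything joined into a single word
count_args $@     # 3 — unquoted, so word splitting applies
```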

Looping through arguments

It’s a pretty common task with pretty easy syntax:
for arg in "$@"; do
    echo "$arg"
done

Or better yet:

for arg; do
    echo "$arg"
done


The most common problem with subshells is that they cannot affect the parent shell’s variables. For example:
echo "stuff" | read some_var

In this example, the usage of | introduces a subshell and some_var is defined in that subshell. The subshell then vanishes when the execution of the line is over, so you cannot use some_var in the rest of the script. There are a few ways to get around this issue, the simplest being:

echo "stuff" | {
    read some_var
    echo "I can use $some_var"
}

Here | still introduces a subshell, but we continue to do our stuff inside it. You still can’t communicate with the parent shell though: after the { ... } block is over, some_var is no longer available. At this point you have two solutions: here strings and process substitutions.

Here strings

Continuing the example above, we can do something like this:
read some_var <<< "stuff"
# or
read some_var <<< $(echo "stuff")

<<< redirects the string to the stdin of the command. This way we didn’t create a subshell, and we can use some_var in the rest of the script.

Process substitution

A process substitution creates a temporary file with the given output and passes that temporary file to a command. For example:
read some_var < <(echo "stuff")

Here, the effect is the same as with here strings, but what happens is a lot different. As you may already know, < redirects a given file to the stdin of the command before it. <(...) simply creates a temporary file containing the output of ... and replaces itself with the path to that temporary file. To simplify, you can think of the command as becoming read some_var < /dev/fd/some_number after the <(echo "stuff") part is evaluated (/dev/fd/... is the path where the temp file is created, and it contains stuff). Now < simply redirects the contents of that file to the read some_var command.
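Another classic use is feeding command outputs to a program that expects file arguments, such as diff:

```shell
# diff expects two file names; process substitution hands it /dev/fd entries
# backed by the two printf outputs, no manual temp files needed.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo "identical"

# Echoing a substitution shows the path it expands to (something like /dev/fd/63):
echo <(true)
```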


Functions that accept both arguments and stdin

Let’s say that you want your function to accept data either as arguments or from stdin. You can simply combine the ${VAR:-DEFAULT} syntax with a redirection operator and you will have this:

Now your function will concatenate its arguments and assign them to str or, if there are no arguments, it’ll read stdin into str.
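The code block seems to have been lost above; a sketch matching the description (print_data is a made-up name) could look like this:

```shell
# Use the joined arguments if any were given; otherwise fall back to stdin.
# The $(</dev/stdin) default is only evaluated when "$*" is empty.
print_data() {
    local str="${*:-$(</dev/stdin)}"
    echo "data: $str"
}

print_data hello world           # → data: hello world
echo "from stdin" | print_data   # → data: from stdin
```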

Linting bash scripts

It’s really hard to spot errors in your bash scripts because of bash’s dynamic nature, and when an error occurs bash doesn’t really care about it and gives you as little information as possible. A great tool called shellcheck addresses these shortcomings of bash. It’s a great bash linter that detects a lot of the common mistakes and gives you nice advice that makes your code more portable/readable/safe. Just use it. (For Arch Linux users that do not want to install a bunch of haskell-* packages as dependencies, there is also a shellcheck-static package in the AUR; I recommend using that. For vim users, I recommend the ALE extension, which works out of the box with shellcheck.) For Emacs users, Flycheck works out of the box with shellcheck.

Running SQL on org-mode tables

I was tracking some sleep-related information about myself using org tables and I wanted to visualize it. I thought to myself: I know R! Let’s do all that stuff in R! Oh boy, I was wrong. I used R in the past for an undergraduate course and I wasn’t heavily invested in taking notes at the time. (Now, thanks to org-mode and zotero, I don’t forget anything anymore.) I quickly gave up using R for manipulating the data, but I was going to use it for plotting anyway. At that point I was about to give up entirely, firstly because I didn’t want an overly complex solution for such a trivial thing, and secondly because I was extremely lazy.

Then I remembered sqldf. It’s an R package that manipulates R dataframes (basically tables, at least for our purposes in this post) using SQL. Behind the scenes it uses an SQL DB implementation for this; it handles all the dirty stuff for us, like creating tables, running the SQL and converting between the formats. So I simply used sqldf and R’s plot function to accomplish my goal (yeah, the ob-R package supports passing org tables to R code as variables). Then I thought it might be really nice to have an SQL backend for manipulating org tables. Because why not? Nearly every table-like technology has some kind of SQL-like query language.



You need to install R and the sqldf package.
pacman -S r # use your package manager for installing R, this is just an example for Arch

Now you need to install sqldf. But before that, I recommend adding something like this to your environment variables (probably via your ~/.profile file, you know what’s best), otherwise you will need root privileges to install R packages.

export R_LIBS_USER="$HOME/.rlibs"

You also need to create that directory:

mkdir ~/.rlibs
# BTW, run this too while you are here:
echo 'options(repos = c(CRAN = ""))' > ~/.Rprofile

Now open the R console:

R

And run this:

install.packages("sqldf")

That’s all for the R part.


Enable running R code in org-babel:

(org-babel-do-load-languages
 'org-babel-load-languages
 '((R . t)))

This is optional, but for R syntax highlighting and such you may want to install the ess package. I recommend installing it with use-package:

(use-package ess :ensure t)

Running SQL on org tables

Now you can simply do this:
#+tblname: tbltest
| col_a | col_b |
|     1 |     2 |
|     1 |     4 |
|     1 |     6 |
|     2 |     7 |
|     2 |     8 |
|     2 |     9 |

#+begin_src R :colnames yes :var tbltest=tbltest
sqldf("SELECT col_a, AVG(col_b) FROM tbltest GROUP BY col_a")
#+end_src

And as the result, you get this:

| col_a | AVG(col_b) |
|     1 |          4 |
|     2 |          8 |

Nice! But we don’t have SQL syntax highlighting. We can get over it by doing something like this:

#+name: tbltest-sql
#+begin_src sql
SELECT col_a, AVG(col_b) FROM tbltest GROUP BY col_a
#+end_src

#+begin_src R :noweb yes :var tbltest=tbltest
sqldf("<<tbltest-sql>>")
#+end_src
Now we have nice syntax highlighting for our SQL. But with this approach you need at least two code blocks every time.

Using SQL instead of table formulas

I found some obscure ways of doing this but here I present the most sane one:

Firstly, you need to have a named src block somewhere in your org file that calls sqldf with the given SQL code. Putting it under a section with the :noexport: tag might be a good idea if you intend to export the document:

#+name: table-sql
#+begin_src R :var sql="" :colnames yes
sqldf(sql)
#+end_src
#+tblname: sometbl
#+RESULTS: sometbl
| col_a | col_b | col_sum |
|     1 |     2 |       3 |
|     1 |     4 |       5 |
|     1 |     6 |       7 |
|     2 |     7 |       9 |
|     2 |     8 |      10 |
|     2 |     9 |      11 |
#+NAME: sometbl
#+CALL: table-sql[:var sometbl=sometbl](sql="SELECT col_a, col_b, (col_a + col_b) as col_sum FROM sometbl")

When you C-c C-c on the #+CALL line, the table will be replaced with the result of given SQL.

I believe things could be simplified with a little bit of elisp, but it may not be worth the effort; this already seems like an OK solution to me.

UPDATE: Here is an interesting package, called orgaggregate, which covers most of the use cases presented here and much more but without any external dependencies and does everything with a sane syntax. Check it out!

Better keyboard experience in Linux

In this post, I’ll try to describe a more healthy and productive way of using the keyboard in GNU/Linux. My main goal is not to impose a certain way of using the keyboard but to introduce some concepts, and some very useful tools that you can build your workflow upon.

The case against the mouse

(This part is mostly just me rambling, feel free to skip it)

First of all, I’m a big believer in a keyboard-oriented workflow. Sometimes it costs more time to use the keyboard, but it helps me stay sane. The mouse generally requires a certain level of consciousness: you need to aim for stuff, try to be precise while selecting something, etc. The content you deal with using the mouse is not static, so you need to do some calculation every time to get the desired action. With the keyboard, you can just mindlessly press your 4-key shortcut and a bit of magic happens. After a certain point, even your most complex shortcuts become a reflexive response.

There are use cases for the mouse too, of course! Mindlessly scrolling down a website is always better done with a mouse on your lap. Some jobs may be better suited to a drag-and-drop-focused workflow, and I get that. What I try to minimize is needing the mouse from time to time while doing keyboard-focused work; that is just a distraction and a cause of wrist pain. Other than that, trying to eliminate the mouse is pointless.

Modifying the keymap

To get the most out of your keyboard, we need to create a specialized keymap for ourselves. For doing that I’ll be using xmodmap. xmodmap is a simple utility tool for modifying your keymaps. The configuration is generally done through ~/.Xmodmap file.

Selecting the proper base keymap

I simply recommend using the us(intl) keymap as the base keymap, because it enables the AltGr key, which will become super beneficial later in this post. To set your keymap to us(intl), do this:
localectl set-x11-keymap 'us(intl)'

You need to restart your X session to get it working or you can simply do this:

setxkbmap 'us(intl)'

Fixing some problems with the us(intl)

While it enables the AltGr key, it also turns the backtick and apostrophe keys into dead keys that create accented versions of the pressed key. I do not want this behavior; to get the normal behavior back, add these to your ~/.Xmodmap:
keysym dead_grave = grave asciitilde
keysym dead_acute = apostrophe quotedbl

Empowering the [, ] keys

When you press Shift + [ you get {. As a natural extension to that, I bind AltGr + [ to (. This is simply easier than doing Shift + 9; considering how frequently parentheses are used while coding, this change is a nice touch. Put these into your ~/.Xmodmap:
!! AltGr+[ → (, AltGr+] → )
keysym bracketleft = bracketleft braceleft bracketleft braceleft parenleft
keysym bracketright = bracketright braceright bracketright braceright parenright

More UTF-8 chars

Most modern programming languages support using UTF-8 glyphs. For example, you can use → instead of -> or ≥ instead of >=. They are more expressive, better-looking and feel right. Also, while preparing a document or having a casual conversation, it’s just nicer to utilize these characters. Here is the related part of my ~/.Xmodmap:
!! Quick access for some unicode chars
!! altgr + b → λ  | altgr + a → →
!! altgr + n → ¬  | altgr + d → ⇒
!! altgr + , → ≤  | altgr + . → ≥
!! altgr + = → ≠  | altgr + shift + = → ≔
!! altgr + / → ÷  | altgr + ; → ∷
!! altgr + 8 → ×  | altgr + t → ✓
!! altgr + x → ❌ | altgr + f → ∀

keysym b = b B b B U03BB
keysym a = a A a A U2192
keysym x = x X x X U274C
keysym f = f F f F U2200
keysym n = n N n N U00AC
keysym d = d D d D U21D2
keysym t = t T t T U2713
keysym 8 = 8 asterisk 8 asterisk multiply
keysym comma = comma less comma less U2264
keysym period = period greater period greater U2265
keysym equal = equal plus equal plus U2260 U2254
keysym question = slash question slash question division
keysym semicolon = semicolon colon semicolon colon U2237

A new modifier key, Hyper

CapsLock is, at least for me, one of the most useless keys on the keyboard. Actually it’s somewhat more useful than RightCtrl, in that you can at least press it comfortably. But the functionality is not really required: do you really find yourself typing in all caps for long periods of time? Even if so, you can simply write everything in lowercase and convert it to upper case with the help of your favorite text editor. What I like to do is remap the CapsLock key to a new modifier key, namely Hyper, which enables you to create new shortcuts. You can think of Hyper as being like the Control key, except that no program uses it and you are free to map anything you want to it. Here is the relevant ~/.Xmodmap configuration:
!! Unmap capslock
clear Lock
keycode 66 = Hyper_L
!! Leave mod4 as windows key _only_
remove mod4 = Hyper_L
!! Set mod3 to capslock
add mod3 = Hyper_L

Now we will be able to create shortcuts using this Hyper key. I’ll come to this later in this post.

Another thing some people like to do is use CapsLock as ESC, and I’m also into that, but I don’t want to sacrifice my Hyper key either. There is a solution for this, involving another simple tool, where the CapsLock key acts as Hyper when combined with other keys but as ESC when pressed alone. I’ll come to this later in the post too.


I don’t know if anybody uses this key unironically, but the only use case I found for it was as the ESC key. On my older keyboard I was able to press RightCtrl with my palm, and as the ESC key it served me quite well. It’s harder to press RightCtrl with my palm on my new keyboard, so I just don’t use it anymore; I simply use CapsLock as ESC as described above. But here is the configuration for using RightCtrl as ESC if you want to give it a shot:
keycode 105 = Escape

Global directional keys

I do not like leaving the home row of my keyboard; it’s just hard to reach for the arrow keys, for example. Also, when you get used to the h,j,k,l keys for directional movement in vim, you just want them everywhere. So I simply remapped AltGr + {h,j,k,l} to the {Left, Down, Up, Right} keys respectively. When you press AltGr + j, it acts like the Down key, anywhere in your system. You do not need per-program configuration, you just need this in your ~/.Xmodmap:
keysym h = h H h H Left Home
keysym j = j J j J Down Prior
keysym k = k K k K Up Next
keysym l = l L l L Right End

This configuration also binds AltGr + Shift + {h,j,k,l} to the Home, Prior, Next, End keys. I have a little issue with this combination though: AltGr + Shift + h gets registered as Shift + Home. This makes some programs select the text from the cursor to the beginning of the line, while other programs do not do that. The programs I use mostly behave the way I want.

Side note for Emacs users: I generally do not use these bindings in Emacs for movement, but sometimes I do, and Emacs makes a selection when I press them. You can disable shift-selection to get the desired result:

(setq shift-select-mode nil)

More with AltGr

As you may have inferred, to create a combination involving AltGr you need to change fifth field of the keysym assignment.
!! AltGr + j → Down
!! I'm not quite sure what the second j J part does but I accepted that as it is
keysym j = j J j J Down

You can also use AltGr to create accented characters; this might be a nice alternative to constantly switching between your native keyboard layout and us(intl). If you find any other use cases for this key, let me know! The nice part of utilizing this key is that, like the Alt key, you press it with your thumb, and your thumb is the strongest finger on your hand. So it makes sense to embrace keys like Alt and AltGr.

Shortcuts, key-bindings

There are tons of programs that can handle this but my personal favorite is sxhkd. It’s DE/WM agnostic, the configuration is pretty simple and intuitive. It also supports key chording, which is just fantastic.

I use my super (windows) key for the WM related shortcuts; like super + {h,j,k,l} for switching the focused window, super + {comma, period} for focusing next/prev monitor, super + w for closing the current window etc. Observe the following configuration to get a taste of sxhkd:

# Focus the next/previous desktop
super + {n,p}
    bspc desktop --focus {next,prev}.local

# audio/mic toggle
    amixer set {Master,Capture} toggle

I use the hyper key to manage all the programs I have, or to run stuff: hyper + p does a play/pause, hyper + c opens a calendar in a popup-like window, hyper + t opens a popup for translation, etc. These things take a lot of keys, but I also want some shortcuts for opening programs. I can always do hyper + a and search for the specific program that I want to open by typing its name, but that’s time consuming. A simple binding would be better, but we already exhausted all the keys on the keyboard. This is where chord chains come in:

# Run stuff
hyper + r; {f, e, r, t, v, k, q}
    {firefox, emacsclient -c, jaro ~, lxtask, vivaldi-stable, keepassxc, qbittorrent}

When I do hyper + r followed by f, Firefox opens. Simple as that. This gives you a whole new set of bindings. Multiple keys are also supported; for example, I have this in my configuration:

hyper + r; p; {s, p, w}
    sxiv {~/Pictures/screenshots/, ~/Pictures/phone/Camera/, ~/Pictures/wallpapers/}

Automating stuff through shortcuts is nice, especially if the program offers a nice set of command-line options. Some programs do not offer a command-line interface but do offer a DBUS API that you can utilize; it’s nice to keep this in mind while creating your bindings.

Various tools/configurations

hyper as ESC

As I mentioned above, I use hyper as a modifier key when used in combination with some other key. But when I press it by itself, it acts as ESC key. This is achieved through using a simple program called xcape. I start xcape with the arguments below and it gives me this functionality:
xcape -e 'Hyper_L=Escape'

The purpose of xcape is to make a modifier key act as another key when it is pressed and released on its own. So in this case, we simply tell xcape to make hyper act as ESC when it’s pressed and released on its own. The thing is, you may experience a slight delay, because ESC is registered right after you release the hyper key.

You can also use the shift or ctrl keys (or any modifier) as ESC or any other key when they are pressed and released on their own.


xev is a small utility program that may help you during the configuration phase. It simply shows X events, you can press keys or key combinations to get their key codes, key symbols etc.

Things to consider

I try to create one-key bindings whenever I can. While this is not really possible at the system level, it’s quite possible in programs like Vim or Emacs. If I’m going to create a new binding that requires at least two keys (one being a modifier), I try alt as the modifier first; I only use ctrl if I absolutely have to. Thumbs are very strong, while pinkies get stressed pretty easily. Based on this, one could argue that assigning CapsLock to ESC might be bad for my left pinky. I think this is a non-issue, because the real stress happens when doing a key combination; simply hitting a single key with my pinky does not generate much stress.


I am always looking for ways to enhance my keyboard usage. I’m not a very fast typist; at my best I can write ~70 WPM with high concentration (and only for a short period of time). But the things I explained above are not about typing fast, they are about using your computer more easily, especially for programming. If you have more keyboard-related tricks or better use cases for the programs I mentioned above, please share them with me!

How do I keep my days organized with org-mode and Emacs


I’ve been using org-mode to organize my life for quite a long time; all the deadlines, recurring events, any kind of plans, projects etc. live in several org files. The most beneficial part of this approach is that things grow. When you write something down and have an easy way to access that piece, you’ll start expanding it. You also start to notice patterns, which eventually leads you to optimize that particular thing. There are also some other minor benefits, like not forgetting important stuff and seeing your life in a more structured manner. These are all great, but you need to find a balance between planning/note-taking and doing actual stuff, otherwise it’ll just overwhelm you and impact everything negatively. I’m not saying that I have it all figured out, but at least for this particular piece, organizing your day in an org file, I have some nice ideas.

General structure

I have a file where I keep all this day-management stuff[fn:: All of my daily notes are in this file; I don’t like creating a new file for each day, for various reasons: moving tasks between them becomes problematic, jumping to an earlier day is problematic, and if I want to, I can always do org-narrow-to-subtree and that’s it, it’s like an individual file now.]. Its name comes from the bullet journaling thing, and I’m not sure if this can be called bullet journaling, but the name gives the file a little more personality and I like that.

Starting the day

Every new day, I create a new level 1 header with today’s date and start planning the day. I have a snippet called daily that expands to a big checklist of my daily routines[fn:: The reason for not using a static header with a recurring timestamp for these kinds of habits/routines is that it does not give you enough flexibility. Sometimes I skip breakfast, and sometimes I have it at a very different time than usual. This way I have full control over the day, with some starting points. I also like to see all of my daily items under the same header instead of resorting to the agenda.]. So far I’ve got this:


I use the clocking functionality to keep track of how much time I spend doing stuff, so I just start clocking on the Daily planning TODO item. I’ll show how I utilize this later in the post.

The screenshot does not reflect everything I have in my daily snippet, but it’s a rough estimate. As you can see, I left little clues for myself in the Daily planning header, and all the other headers have some predefined directions for me. For instance, the Breakfast header reminds me to take my daily supplements (so that when I have my breakfast and open the file to mark it as done, I remember my pills), Housework has some predefined chores that I need to do, like washing the dishes, and Reading has a link to my reading list (which is just another org file) and to my current readings so that I can recap what I was working on. Little conveniences like that. I remove what’s not relevant to the day while going through the headers; for example, today is a weekend and I do not need to worry about work stuff, so I just remove those checkboxes.

One minor anecdote about these notes: since I added the Plan the dinner item to the Daily planning header, I started cooking at home much more regularly. Before that, I would think about dinner only once I got hungry, and for multiple reasons (like being hungry at that moment, or not having enough ingredients) I would just go ahead and order something. Cooking is a pretty disciplined activity, and planning makes it easy.

As my habits/routines change, this daily snippet changes too. The important thing is that you should be able to do this relatively easily; I have some shortcuts to quickly access my snippets and edit them. Just to note, I use yasnippet for the snippets.

The reason for having all of these routines under a header called Routines, instead of expanding them right under the current day’s header, is to keep my daily view clean and uncluttered. See the next screenshot.

So when I finish planning, I just clock out:


And this is what the whole day looks like after planning:


Right now it’s the weekend, and there is nothing related to work here. When there are work items, I just tag them with the work tag, so that I can hide/show work-related and non-work-related items pretty easily. Other than this case, I don’t use tags much. Here is an example working day:


I tend to schedule things to specific hours, so that I can see them in my agenda view, and when I sync these files to my phone, orgzly sends me notifications before the scheduled time. I don’t really use orgzly for anything other than this, except for its pretty widget on my main screen: when I pick up my phone, the first thing I see is my TODO items.

Little conveniences

There are some little things that I sprinkled through my Emacs configuration that make this file a bit more accessible and pretty.


Pretty famous package that brings some fanciness to org documents.
(use-package org-bullets
  :ensure t
  :hook (org-mode . org-bullets-mode))


This brings some fanciness to priority indicators.
(use-package org-fancy-priorities
  :ensure t
  :hook (org-mode . org-fancy-priorities-mode)
  :config
  (setq org-fancy-priorities-list '("🅰" "🅱" "🅲" "🅳" "🅴")))


Well, I don’t make much use of tags, but here you go: it replaces your tags with nice little UTF-8 icons of your choice.
(use-package org-pretty-tags
  :diminish org-pretty-tags-mode
  :ensure t
  :config
  (setq org-pretty-tags-surrogate-strings
        '(("work"  . ""))))


Fancy checkboxes

This does not need any external dependency; it’s possible to do it with prettify-symbols-mode:
(isamert/prettify-mode 'org-mode-hook
                       '(("[ ]" . "")
                         ("[X]" . "" )
                         ("[-]" . "" )))

;; Also here is the `isamert/prettify-mode' macro.
;; You don't need this, but it's a bit more convenient if you make use of
;; the prettify-symbols minor mode a lot
(defmacro isamert/prettify-mode (mode pairs)
  "Prettify given PAIRS in given MODE.  Just a simple wrapper around `prettify-symbols-mode'."
  `(add-hook ,mode (lambda ()
                     (mapc (lambda (pair)
                             (push pair prettify-symbols-alist))
                           ,pairs)
                     (prettify-symbols-mode))))
Quickly accessing the file

It’s quite important to be able to easily open and take notes in this file. Thus, I created a shortcut that toggles this file on the right side of Emacs. See the following:


The code is a bit long but the important function is isamert/toggle-side-bullet-org-buffer. I assigned a keybinding to this and it simply toggles the file in a side buffer.

(defun isamert/toggle-side-bullet-org-buffer ()
  "Toggle `` in a side buffer for quick note taking.  The buffer is opened in a side window so it can't be accidentally removed."
  (isamert/toggle-side-buffer-with-file "~/Documents/notes/"))

(defun isamert/buffer-visible-p (buffer)
 "Check if given BUFFER is visible or not.  BUFFER is a string representing the buffer name."
  (or (eq buffer (window-buffer (selected-window))) (get-buffer-window buffer)))

(defun isamert/display-buffer-in-side-window (buffer)
  "Just like `display-buffer-in-side-window' but only takes a BUFFER; the rest of the parameters are for my taste."
  (display-buffer-in-side-window
   buffer
   (list (cons 'side 'right)
         (cons 'slot 0)
         (cons 'window-width 84)
         (cons 'window-parameters (list (cons 'no-delete-other-windows t)
                                        (cons 'no-other-window nil))))))

(defun isamert/remove-window-with-buffer (the-buffer-name)
  "Remove window containing given THE-BUFFER-NAME."
  (mapc (lambda (window)
          (when (string-equal (buffer-name (window-buffer window)) the-buffer-name)
            (delete-window window)))
        (window-list (selected-frame))))

(defun isamert/toggle-side-buffer-with-file (file-path)
  "Toggle FILE-PATH in a side buffer.  The buffer is opened in a side window so it can't be accidentally removed."
  (let ((fname (file-name-nondirectory file-path)))
    (if (isamert/buffer-visible-p fname)
        (isamert/remove-window-with-buffer fname)
      (isamert/display-buffer-in-side-window
       (find-file-noselect file-path)))))

Throughout the day

Throughout the day, I just clock the work I’m doing. On work days, I take small notes about the thing I’m working on at that moment. If that thing looks like it will take more than 1-2 days, I’ll just create a header for it in another org file and take my notes there; the daily file is then only responsible for the clocking and for holding a link to that header for this kind of specific stuff. I also take my meeting notes here.

As you may have realized, there is another header called Notes in the screenshots above. This is for taking out of context notes during the day, like:

  • A clever/useful code snippet
  • A realization of something
  • A link to look at later
  • Anything else that I might be interested in later

Having this header there and seeing it from time to time also pushes you to come up with some pretty useful notes for yourself. Sometimes I just see it and think: yeah, that was quite nice, I should take a note. I try to utilize org-mode features as much as I can while doing this: I create links to code files, put in screenshots, etc. And while we are at it, here is a snippet that I use for quickly capturing screenshots into org documents:

  (defun isamert/org-attach-image-from-clipboard (file-path)
    "Save the image from clipboard to FILE-PATH and attach it into the document.
If FILE-PATH is empty or nil, then image is created under ~/.cache with a random name.
FILE-PATH is relative to the current documents directory."
    (interactive "sSave file to (leave empty to create a temp file): ")
    (let ((file (if (and file-path (not (string-empty-p file-path)))
                    file-path
                  (make-temp-file "~/.cache/org_temp_image_" nil ".png"))))
      (cond
       ((locate-file "xclip" exec-path) (shell-command (format "xclip -selection clipboard -target image/png -out > %s" file)))
       ((locate-file "pngpaste" exec-path) (shell-command (format "pngpaste %s" file)))
       (t (message "Either install xclip (for linux) or pngpaste (for mac) to get this functionality.")))
      (insert (format "#+ATTR_ORG: :width 400\n[[file:%s]]" file))))

Finishing the day

As you may have seen in the screenshots above, there was another header called Summary. When the day ends, that is to say, before going to bed, I open the file and create a Summary header. The first thing I do is get an overall view of what I’ve been doing the whole day. I do this by creating a clocktable with the following settings:
#+BEGIN: clocktable :scope tree1 :maxlevel 3 :block untilnow
:scope tree1
This takes the scope as the surrounding level 1 header, which corresponds to today’s header.
:maxlevel 3
Just to make things simple.
:block untilnow
Instead of using :block day, I use this, because a day does not end when the clock hits 00:00 (technically yes, the day ends, but to my perception the day ends when I go to sleep). So :block untilnow incorporates all the clockings under this day’s header (day as in my perception), and since we are limited by :scope tree1, no clockings from other days get mixed into our table. See:
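Put together, the block and the kind of table it generates look like this (headings and durations here are made up for illustration):

```
#+BEGIN: clocktable :scope tree1 :maxlevel 3 :block untilnow
| Headline       | Time   |      |
|----------------+--------+------|
| *Total time*   | *4:35* |      |
| Work           | 3:45   |      |
| \_  Some task  |        | 2:10 |
| \_  Other task |        | 1:35 |
| Notes          | 0:50   |      |
#+END:
```

Hitting C-c C-c on the #+BEGIN: line regenerates the table in place.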


After this, if I’m feeling well I just add a few observations about the day or maybe note down unexpected things that I encountered during the day. I also open my Notes header and try to create tasks based on those and simply move easy ones to where they belong (like moving snippets to my snippet file etc.).


Well, that’s it. I have other org files that I use along with this one, and I utilize org-ql to connect things together or to find/filter them instead of relying on the agenda, but I guess that’s another post. I’ve been using this exact methodology for more than a year (earlier I had similar methods, but they were quite different in terms of organization) and I can say that it’s somewhat effective against my procrastination. I’m not saying that I don’t need willpower anymore, but it’s easier to do things when you’re more organized, and this file keeps me sane during the day.


Turkish spell checking in Emacs

  • First, install hunspell on your system.
  • Download the latest version of the Turkish dictionary from this address.
  • Open the file with any archive viewer and extract the tr-TR.dict and tr-TR.aff files under dict into /usr/share/myspell/dicts.
    • I replaced the -’s in the file names with _, to be consistent with the other dictionaries on the system. If you didn’t, adjust the snippets below accordingly.
    • The directory to extract into may be different on your system, so verify it against the FILES section at the bottom of man hunspell.

The required Emacs configuration is as follows:

(setq ispell-program-name "hunspell"
      ispell-local-dictionary "en_US"
      ispell-local-dictionary-alist
      '(("en_US" "[[:alpha:]]" "[^[:alpha:]]" "[']" nil ("-d" "en_US") nil utf-8)
        ("tr_TR" "[[:alpha:]]" "[^[:alpha:]]" "[']" nil ("-d" "tr_TR") nil utf-8)))

;; Enable flyspell in org-mode and markdown-mode
(add-hook 'org-mode-hook 'flyspell-mode)
(add-hook 'markdown-mode-hook 'flyspell-mode)
  • As you can see, English is active by default; you can change this to tr_TR if you prefer.
  • In any buffer where flyspell is enabled, you can switch the currently used dictionary with M-x ispell-change-dictionary.
  • You can see suggestions for a misspelled word with M-x ispell-word. (evil users can use z=.)


Is it perfect? No. For example, in this very post it underlines the word görüntüleyiciyle as misspelled, even though it is correct. The suggestions it offers for misspelled words are also quite mediocre. Still, it helps a great deal in catching most mistakes.
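You can also sanity-check the dictionary outside of Emacs. Assuming the tr_TR dictionary ended up somewhere hunspell can find it, the -l flag lists the misspelled words it finds on stdin:

```shell
# Words hunspell does not recognize are printed, one per line
# ("dnya" here is an intentional misspelling of "dünya"):
echo "merhaba dnya" | hunspell -d tr_TR -l
```

If this errors out, the dictionary files are not in a directory hunspell searches; re-check the FILES section of man hunspell.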

A (relatively) deep dive into (zsh) shell login process (without a display manager)

Recently, I uninstalled my display manager for various reasons. One of them was to get a better understanding of my login process. Display managers generally do arbitrary stuff: they have their own way/order of sourcing files, sometimes they skip important files, and sometimes they source unnecessary ones. So I thought using no display manager would result in a simpler login model. Oh boy, I was wrong.

After quite a bit of fiddling around, I believe I now have a clear mental model of the order in which my files get sourced. Before that, here is a quick summary of some terminology:

[non-]Login shell
It’s the first thing that is executed when you log in to your system. It generally spawns your DE/WM.
[non-]Interactive shell
It’s the shell that you use interactively, the one you type into.

When you log into a TTY, you get an interactive login shell. When you spawn a new terminal in your DE/WM, you get a non-login interactive shell. When you log in to your DE directly, that is done through a non-interactive login shell. A script that you run runs under a non-login non-interactive shell. These distinctions are important because they explain why your PATH variable works as intended in some cases and not in others.
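You can check which kind of shell you are currently in. A minimal sketch (the `shopt` call is bash-specific; in zsh you would test `[[ -o login ]]` and `[[ -o interactive ]]` instead):

```shell
# Print whether the current shell is interactive and whether it is a login shell.
# $- contains "i" in interactive shells; shopt -q login_shell succeeds in
# bash login shells and fails (or doesn't exist) otherwise.
case $- in
  *i*) echo "interactive" ;;
  *)   echo "non-interactive" ;;
esac
if shopt -q login_shell 2>/dev/null; then
  echo "login"
else
  echo "non-login"
fi
```

Run as a script, this prints non-interactive and non-login, which is exactly why scripts only see what ~/.zshenv sets up.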

I use zsh as my main shell and I have also set it as my login shell. zsh has a much simpler sourcing hierarchy; I’m not even going to bother explaining how bash does it. So here is how it goes:

  • First, /etc/zshenv and then ~/.zshenv is sourced. This is done no matter what type of shell you are spawning; these files get sourced every time a shell spawns.
    • /etc/zshenv
      • This file might be under /etc/zsh/zshenv instead of /etc/zshenv, even if man zsh says otherwise. Some distributions change this path for zsh and forget to update the manual.
      • I don’t recommend editing this file unless you really know what you are doing. Just try to utilize files under your home directory, instead of global ones.
    • ~/.zshenv
      • As this file gets sourced every time a new shell spawns, it makes sense to put all the frequently updated stuff here.
      • Also, you probably want to set here all the environment variables that are frequently used by other programs ($PATH, $PAGER etc.).
      • Do not put any aliases here, aliases are meant to be used within an interactive shell.
  • If you are spawning a login shell, /etc/zprofile gets sourced at this point, and then ~/.zprofile. Just to make it clear: these files are only sourced at login and never again (unless you somehow spawn a login shell again, in which case they are of course sourced again).
    • /etc/zprofile
      • Again, this file might be under /etc/zsh/zprofile instead of /etc/zprofile.
      • Most of the distributions have the following line inside the /etc/zprofile file:
        emulate sh -c 'source /etc/profile'
        • This line sources the /etc/profile file (which automatically gets sourced if you are using sh or bash as your login shell), which in turn sources the files under /etc/profile.d. These files generally get installed when you install a program on your system. For example, if you install the nix package manager, it also installs the /etc/profile.d/nix{,-daemon}.sh files, and these files need to be sourced for nix to work properly. So if your /etc/zprofile or /etc/zsh/zprofile does not contain this line, add it to your ~/.zprofile file.
    • ~/.zprofile
      • This file is a nice place to add stuff that does not change very often, as this file is only sourced at login once.
      • You can also do some long running initializations here, as this file gets sourced only once.
      • Also, this is the file from which you want to run startx if, like me, you are not using a display manager. Here is how I do it:
        # Following automatically calls "startx" when you login on tty1:
        if [[ -z ${DISPLAY} && ${XDG_VTNR} -eq 1 ]]; then
            # Logs can be found in ~/.xorg.log
            exec startx -- -keeptty -nolisten tcp > ~/.xorg.log 2>&1
        fi
        • After running startx, it loads the ~/.xinitrc file. This is the file where you want to start your window manager and all the other programs you want at startup. An example file might be:
          # The following is essential, you need to source
          # every file under `/etc/X11/xinit/xinitrc.d`.
          if [[ -d /etc/X11/xinit/xinitrc.d ]] ; then
              for f in /etc/X11/xinit/xinitrc.d/?*.sh ; do
                  echo "Sourcing $f"
                  [[ -x "$f" ]] && . "$f"
              done
              unset f
          fi
          # Merge the system-wide X resources/modmap, if any
          sysresources=/etc/X11/xinit/.Xresources
          sysmodmap=/etc/X11/xinit/.Xmodmap
          [[ -f $sysresources ]] && xrdb -merge $sysresources
          [[ -f $sysmodmap ]] && xmodmap $sysmodmap
          # Load your keymap and all that jazz
          setxkbmap 'us(intl)'
          xrdb -merge $HOME/.Xresources
          xmodmap $HOME/.Xmodmap
          # Cursor
          xsetroot -cursor_name left_ptr
          # Starting programs like your compositor makes sense here
          picom -b
          # Some other programs related to X
          unclutter &
          redshift &
          # Run your WM, as an example I run bspwm
          exec bspwm
  • Now, /etc/zshrc (or /etc/zsh/zshrc) and then ~/.zshrc get sourced, if you are spawning an interactive shell.
    • This is the file that you want to dump all of your aliases, zsh functions that you want to use interactively, your shell theme/plugins etc.
  • If you are spawning a login shell, /etc/zlogin (or /etc/zsh/zlogin) and then ~/.zlogin get sourced now.
    • I don’t use this file.
    • Don’t use this file if you are already using ~/.zprofile, or else move everything from ~/.zprofile into this one.
    • The only difference is that this file is sourced after ~/.zshrc (if that is sourced at all), which doesn’t make much sense (to me).
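The whole list above boils down to four orderings. Here is a small helper encoding them, where each name stands for the /etc file followed by the ~/. file of the same name:

```shell
# Print which zsh startup files run, in order, for a given shell type.
order_for() {
  case "$1" in
    login-interactive)       echo "zshenv zprofile zshrc zlogin" ;;
    login-noninteractive)    echo "zshenv zprofile zlogin" ;;
    nonlogin-interactive)    echo "zshenv zshrc" ;;
    nonlogin-noninteractive) echo "zshenv" ;;
  esac
}

order_for login-interactive        # → zshenv zprofile zshrc zlogin
order_for nonlogin-noninteractive  # → zshenv
```

The last case is the one that bites people: a plain script or a remote command only ever sees zshenv.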

A use case

A problem I was having was related to unison, a simple and powerful file synchronization program that I use. One problem with it is that it’s quite picky about its version: it requires the exact same version on both client and server, and not only that, it also wants both binaries to be compiled with the same version of the OCaml compiler. To solve this kind of version issue between my rigs (among other reasons), I use the nix package manager, which gets me the same binary on every computer I have. But there is one problem: when I install unison through nix, it gets installed under ~/.nix-profile/bin/unison, and when I run unison to synchronize my files, it fails to find unison on the other computer. Yet I can ssh into the other computer and run unison without a problem. Hmm, what is going on here?
  • I also gave this example above: when nix gets installed, it also installs the following files: /etc/profile.d/nix{,-daemon}.sh
  • These files update the $PATH variable, so ~/.nix-profile/bin gets added to $PATH.
  • These files are sourced through /etc/profile, which is sourced by /etc/zsh/zprofile.
  • /etc/zsh/zprofile gets sourced when a login shell is spawned.

So when I do ssh my-machine, it spawns an interactive login shell and drops me into it. By that time, /etc/zsh/zprofile is already sourced, and thanks to that I can simply run the unison binary. When I run unison on the client, it connects to the server through ssh, but it uses a non-login non-interactive shell while doing that. The same goes for the command ssh my-machine which unison: it runs in a non-login non-interactive shell. Because /etc/zsh/zprofile requires a login shell to get sourced, ~/.nix-profile/bin never gets added to the $PATH variable.

So what’s the solution? You know it: only ~/.zshenv gets sourced when a non-login non-interactive shell is spawned. So I just update $PATH there, and everything works as expected now.
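As a sketch, the fix amounts to a line like this in ~/.zshenv (the nix profile path is from the example above; adjust it to whatever your setup needs):

```shell
# ~/.zshenv -- sourced by every zsh, including the non-login non-interactive
# shell that `ssh host command` spawns, so this PATH entry is always present:
export PATH="$HOME/.nix-profile/bin:$PATH"
```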

Additional resources

Shell startup scripts
I found this great article that summarizes what is going on at startup for different shells with beautiful graphics. It proves me right about using zsh instead of bash.


That’s it. This was just some kind of a braindump. From now on, I’ll just try to drop my notes as blog posts like this.

Killing (copying) currently selected candidate’s content/text in Selectrum

I use Selectrum as my incremental narrowing framework in Emacs, along with Consult. Consult has some nice commands, like consult-grep (or better yet, consult-ripgrep). I always find myself doing a project-wide search to find a line, copying it and pasting it into my current buffer. This became quite repetitive, so I automated it with the following function:
(defun isamert/selectrum-kill-current-candidate ()
  "Kill current candidate's text in selectrum minibuffer and close it."
  (interactive)
  (let ((candidate (selectrum-get-current-candidate))
        (prompt (minibuffer-prompt)))
    (kill-new
     (cond
      ((s-contains? "grep" prompt) (s-join ":" (-drop 2 (s-split ":" candidate))))
      ;; ^ Strip `filename:line-number:` from the text
      ((s-matches? "\\(Go to line\\|Switch to\\)" prompt) (substring candidate 1))
      ;; ^ `consult-line' and `consult-buffer' have an unrecognizable char at
      ;; the beginning of every candidate, so I just strip them here
      (t candidate)))
    ;; The original ending was truncated; quitting the minibuffer like this
    ;; is one way to "close it":
    (keyboard-escape-quit)))

This function essentially kills the currently selected candidate’s text and closes the minibuffer. Then you can yank the text anywhere you want. You may also want to change the code so that it directly inserts the text into the buffer (to achieve that, simply replace kill-new with insert), but I like to kill it first and yank it manually.

To bind it to a key, use the following:

(define-key selectrum-minibuffer-map
  (kbd "M-y") #'isamert/selectrum-kill-current-candidate)

It works with all kinds of Selectrum completion commands. See the following gif:


For embark users

I found out that embark already provides this sort of feature: you can call embark-act (which you should do via a keybinding) when you are on a candidate in selectrum, and then hit w (which calls the embark-save action). This will save the current candidate’s string into your kill-ring. If you are an embark user, this is also a viable option, but I don’t like it because, as you may have seen in the code above, I do some post-processing on the string before saving it into my kill-ring, and that’s not conveniently possible this way. Instead of using the embark-save action, you can add the isamert/selectrum-kill-current-candidate function as an embark action.
Another update
I found out that if you install embark-consult, the weird-character problem goes away when running the embark-save and embark-insert functions. But for grep buffers it still inserts/copies the file-path:line-number prefix.

Managing your contacts in org-mode and syncing them to your phone (Android, iOS, whatever)

I store my contacts in a single org file. The file has the following structure:
* John Doe
:PROPERTIES:
:ID:       some-generated-uuid
:GROUP:    Work
:PHONE:    +1234567890
:ADDRESS_HOME: Foo bar street, no 5
:END:
* Dohn Joe
:PROPERTIES:
:GROUP:    High school
:PHONE:    +1334567890
:END:
- Some notes about this person.

The nice part is that it’s just plain org-mode. I only use top-level headings in this file, instead of creating header hierarchies. I utilize the :GROUP: property to categorize people; this way a person may belong to multiple categories. I use org-ql if I need to find people related to one group or if I want to filter them based on some specific property.

However, the main use case is that I reference these headers in my other org files. For example, I also keep my diary in org-mode, and I may write about some event that I participated in with John Doe from above. I simply reference that person (using org links). The benefit of this is being able to recollect all of your notes about a particular person with one simple search.

Anyway, let’s jump into how I synchronize this contact information with my phone.


I simply create a .vcf file, a format that most contacts apps are aware of, based on my contacts file. Then I synchronize this .vcf file to my phone using Syncthing. The following snippet creates the .vcf file:
(defun isamert/build-contact-item (template-string contact-property)
  (if-let ((stuff (org-entry-get nil contact-property)))
      (concat (format template-string stuff) "\n")
    ""))

(defun isamert/vcard ()
  "Create a .vcf file containing all contact information."
  (interactive)
  ;; The original body was truncated; this reconstruction maps over all
  ;; top-level headings of the current buffer and writes one vCard per contact:
  (with-temp-file (read-file-name "Where to save the .vcf file? " "~/Documents/sync/")
    (insert
     (string-join
      (org-map-entries
       (lambda ()
         (concat
          "BEGIN:VCARD\nVERSION:2.1\n"
          (format "UID:urn:uuid:%s\n" (org-id-get nil t))
          (isamert/build-contact-item "FN:%s" "ITEM")
          (isamert/build-contact-item "TEL;CELL:%s" "PHONE")
          (isamert/build-contact-item "EMAIL:%s" "EMAIL")
          (isamert/build-contact-item "ORG:%s" "GROUP")
          (isamert/build-contact-item "ADR;HOME:;;%s" "ADDRESS_HOME")
          (isamert/build-contact-item "ADR;WORK:;;%s" "ADDRESS_WORK")
          (format "REV:%s\n" (format-time-string "%Y-%m-%dT%T"))
          "END:VCARD"))
       "LEVEL=1")
      "\n"))))

Simply call the isamert/vcard function while inside your contacts file and you get a .vcf file. By default, it creates the file under ~/Documents/sync. This folder is automatically synced with my phone using Syncthing. Then I open my contacts app and import the file. That’s it.

I used the earliest vCard format version available, so that every contacts app can import it. You can add/remove fields in your .vcf export quite easily; just take a look at the Wikipedia page for vCard and add the relevant line to your function.
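For reference, a single exported entry looks roughly like this (values taken from the example contact above; a field line is only emitted when the corresponding org property exists):

```
BEGIN:VCARD
VERSION:2.1
UID:urn:uuid:some-generated-uuid
FN:John Doe
TEL;CELL:+1234567890
ORG:Work
ADR;HOME:;;Foo bar street, no 5
REV:2022-01-29T12:00:00
END:VCARD
```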

Appendix: Interactively copy email of a contact from anywhere in Emacs

Here is an example, just to demonstrate how you can obtain/copy the email of one of your contacts interactively. A use case might be:
  • You open your mail client to send an email to John Doe.
  • You call isamert/contacts-select-email which presents you all of your contact’s names.
  • You select one of your contacts, and their email gets copied into your kill-ring.
  • You paste that email into To: field of your email client.

Don’t forget to point find-file-noselect to your contacts file.

(defun isamert/contacts-email-alist ()
  "Get an alist of contact name and emails."
  (-filter
   (lambda (it) it)
   (org-map-entries
    (lambda ()
      (when-let ((email (org-entry-get nil "EMAIL")))
        `(,(org-entry-get nil "ITEM") . ,email)))))) 

(defun isamert/contacts-select-email ()
  "Search through your contacts interactively and copy their email."
  (interactive)
  (with-current-buffer (find-file-noselect "~/Documents/notes/")
    (let ((email-alist (isamert/contacts-email-alist)))
      (kill-new
       (cdr (assoc-string
             (completing-read "Copy email of: " email-alist)
             email-alist))))))


  • Reddit discussion
  • [2022-01-29 Sat] Added UID section to entries so that when you re-import your contacts after an update to the vcf file, already-existing contacts won’t get duplicated.

Migrating my IMDb ratings list and watch list into org-mode


I have been using org-mode to keep track of my movie ratings and critiques. Org-mode, combined with org-ql, gives me quite a lot of flexibility. I’m also utilizing Zettelkasten-like backlinking and references, so keeping this rating information in org-mode makes sense for me. First, let me show you what it looks like:


And this is how a movie with its properties looks:


I mostly automated the needed information retrieval using orgmdb.el, a package that I wrote. The link contains the relevant information for automatically filling in this data. Before all that, I was using IMDb only to log my ratings. So I just wanted to migrate those ratings into my new org-mode based watch/rating list.

Exporting the data from IMDb

Log in to IMDb and open this link. Hit the 3-dot icon on top-right and click Export.


This will give you a file named ratings.csv.

Parsing the data

I just searched for elisp csv and used the first package that I came across. I did not want to split lines on “,” myself, because the csv contains some quoted texts which may themselves contain commas. Using a library that handles these cases is just better.
(use-package parse-csv
  :ensure t)

With the following, we can parse the data into '((movie1 properties …) (movie2 properties …)):

(setq my-movie-data
      (parse-csv-string-rows
       (with-temp-buffer
         (insert-file-contents "~/Downloads/ratings.csv")
         (buffer-string))
       ?\, ?\" "\n"))

Generating the org-mode rating list

I like using dash.el functionality for these kinds of one-off scripts; it’s very convenient and easy to write. With the following, we can convert the data in ratings.csv into my custom org-mode format:
(->> my-movie-data
  ;; Skip the CSV header
  (-drop 1)
  ;; Skip empty lines etc.
  (--filter (cdr it))
  ;; Format all movies into the format I use in my watchlist
  (--map (format
          "** DONE %s (%s) :%s:\n:PROPERTIES:\n:GENRE:    %s\n:RUNTIME:  %s\n:DIRECTOR: %s\n:RATING:   %s\n:WATCHED:  %s\n:IMDB-ID:  %s\n:END:"
          (nth 3 it)
          (nth 8 it)
          (nth 5 it)
          (nth 9 it)
          (format "%s mins" (nth 7 it))
          (nth 12 it)
          (nth 1 it)
          (format "[%s]" (nth 2  it))
          (nth 0 it)))
  ;; Reduce everything into one single string
  (--reduce (format "%s\n%s" acc it))
  ;; Copy the string
  (kill-new))

This will format all movies into the format I just showed you above and copy the resulting string into your clipboard, so that you can paste it into your watch list file. Feel free to change the formatting to your liking.

Appendix: Getting the movie country data

I was not satisfied with the above result, because I also like to have a :COUNTRY: field in the movie’s property list, so that I can filter based on country etc. As ratings.csv does not provide this information, I had to use the orgmdb package I mentioned earlier.

First, I needed to format the data in the ratings.csv into something like this:

 (setq isamert/movie-rating-list
       '(("tt1010048" . (7 "2016-08-30"))
         ("tt0101540" . (6 "2017-03-13"))
         ("tt1019452" . (8 "2019-11-29"))))

…using this command and then doing a bit of manual work:

cat ratings.csv | awk -F, '{print "(\"" $1 "\" . (" $2, "\""$3 "\"))"}'
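For illustration, here is what that awk invocation produces on a fabricated csv line (real ratings.csv rows have more columns, but only the first three matter here):

```shell
printf 'tt1010048,7,2016-08-30\n' \
  | awk -F, '{print "(\"" $1 "\" . (" $2, "\""$3 "\"))"}'
# → ("tt1010048" . (7 "2016-08-30"))
```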

Because all I need from the file is the rating I gave to a movie and the date I gave it, I’ll get the rest using orgmdb:

(setq isamert/movies
      ;; Fetch the full info for each movie. The exact wrapping was truncated
      ;; in the original; mapping over the rating list is assumed here:
      (--map (orgmdb :imdb (car it))
             isamert/movie-rating-list))

This may take a few minutes depending on how many movies you have in your list; I had to wait a couple of minutes for ~500 movies. Now that we have all the information we need, we can generate our custom org rating list:

(->> isamert/movies
  (--map (let* ((info (cdr (assoc-string (alist-get 'imdbID it) isamert/movie-rating-list))))
           `(,@it (MyRating . ,(car info)) (MyRatingDate . ,(cadr info)))))
  (--map (format "** DONE %s (%s) :%s:\n:PROPERTIES:\n:GENRE:    %s\n:RUNTIME:  %s\n:DIRECTOR: %s\n:COUNTRY:  %s\n:RATING:   %s\n:WATCHED:  %s\n:IMDB-ID:  %s\n:END:"
                 (alist-get 'Title it)
                 (alist-get 'Year it)
                 (alist-get 'Type it)
                 (alist-get 'Genre it)
                 (alist-get 'Runtime it)
                 (alist-get 'Director it)
                 (alist-get 'Country it)
                 (alist-get 'MyRating it)
                 (format "[%s]" (alist-get 'MyRatingDate it))
                 (alist-get 'imdbID it)))
  (--reduce (format "%s\n%s" acc it))
  ;; Copy the string
  (kill-new))


Ekşisözlük, the gündem, distraction, and Emacs

I’m trying to reduce my Ekşisözlük usage (ideally to zero), but that doesn’t quite work out. I eventually realized that the main reason I visit ekşi is to check the gündem (the trending topics). And within the gündem, what I’m mostly curious about is which topics got the most entries. But ekşi does not sort the gündem strictly by entry count, which I find maddening. While I just want to glance at the handful of topics driving the agenda, I get distracted by the other topics, or I waste time hunting for the most active ones. Most of the time I don’t even care about the entries themselves; I just want to skim the titles. My solution for this was:
curl --silent | grep '?a=popular' | sed -E 's/[ ]*href="(.*)">(.*) <small>(.*)<\/small>(.*)/(\3) \2/' | sort -V -r | uniq

As of 2021-05-16 23:28, the output of this command was:

(696) 16 mayıs 2021 sedat peker açıklamaları
(675) 2020-2021 sezonu şampiyonu beşiktaş
(567) 16 mayıs 2021 içişleri bakanlığı genelgesi
(553) beşiktaş jean-claude billong skandalı
(482) fatih terim
(463) 22-23 mayıs 2021 yasak protestoları
(339) aynı anda 24 kızla sevgili olan öğretmen
(332) hafta sonu sokağa çıkma yasağı
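To illustrate, here is what the sed/sort part of that pipeline does to a fabricated HTML fragment (the real page markup may differ):

```shell
printf '%s\n' \
  '  href="/fatih-terim?a=popular">fatih terim <small>482</small>' \
  '  href="/hafta-sonu?a=popular">hafta sonu <small>332</small>' \
  | sed -E 's/[ ]*href="(.*)">(.*) <small>(.*)<\/small>(.*)/(\3) \2/' \
  | sort -V -r | uniq
# → (482) fatih terim
#   (332) hafta sonu
```

The sed capture groups pull out the entry count and the title, and sort -V -r orders the lines by that count, descending.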

And in case I want to open a topic and glance at an entry or two, I wrapped this in an interactive Emacs function, which looks like this:

(defun isamert/eksi-gundem-sirali ()
  "Sort the eksi gundem by entry count and present it via `completing-read'."
  (interactive)
  (let* ((selectrum-should-sort nil)
         (results (->> (shell-command-to-string "curl --silent | grep '?a=popular' | sed -E 's/[ ]*href=\"(.*)\">(.*) <small>(.*)<\\/small>(.*)/(\\3) \\2|||\\1/' | sort -V -r | uniq")
                    (s-split "\n")
                    (--map (s-split "|||" it))
                    (--map `(,(car it) . ,(cadr it))))))
    (->> results
      (completing-read "Baslik: ")
      (funcall (-flip 'assoc-string) results)
      (cdr)
      ;; The format string was truncated in the original; prepending the
      ;; site's base URL to the selected path is assumed here:
      (format "https://eksisozluk.com%s")
      (browse-url))))

In action it looks like this:


On the browser side, I also blocked the left frame and the ekşişeyler frame with uBlock Origin. This way, when I do visit the site to read entries, things popping up from every corner don’t distract me. Hopefully, at some point in the future, I’ll stop visiting this site altogether.

Dealing with APIs, JSONs and databases in org-mode

I deal with web APIs quite a lot in my daily job. I use org-mode and ob-http to make requests and display their results. See this:

#+begin_src http :pretty
#+end_src

#+RESULTS:
: {
:   "userId": 1,
:   "id": 1,
:   "title": "delectus aut autem",
:   "completed": false
: }

Hitting C-c C-c on the first block will make a GET request to the given URL and paste the results into the #+RESULTS: part. This is quite cool, and pretty good for quickly prototyping stuff right inside org-mode. You can build quite a nice workflow around this if you are dealing with APIs a lot.

An improvement you can apply to this is wrapping the result in a JSON block, so that you get JSON highlighting and other goodies. Let’s see how we can do that:

#+begin_src http :pretty :wrap src json
#+end_src

#+RESULTS:
#+begin_src json
{
  "userId": 1,
  "id": 1,
  "title": "delectus aut autem",
  "completed": false
}
#+end_src
Now we have nice syntax highlighting, thanks to the :wrap src json parameter on the first line.

Another good thing you can do is manipulating the result. ob-http offers a very convenient way to do this:

#+begin_src http :pretty :select .title
#+end_src

#+RESULTS:
: delectus aut autem

It simply pipes the result of the request into jq, with the value you provided to :select in the header of the code block. This is especially useful if you want to pipe the result into another code block. You can give this code block a name by putting #+name: todo-title right above it, and pass its result into other code blocks by adding :var TODO-TITLE=todo-title to their block headers. Quite convenient! You can also utilize the noweb syntax if you want to get fancy.
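Under the hood this is just a jq pipe; assuming jq is installed, the equivalent shell invocation is:

```shell
# Pipe the JSON response into jq and extract one field as raw text (-r):
echo '{"userId": 1, "id": 1, "title": "delectus aut autem", "completed": false}' \
  | jq -r .title
# → delectus aut autem
```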

There is one little problem with the :select approach though: If you are dealing with big JSONs and you want to explore the JSON or try things out by changing the :select parameter, you send the same request over and over again. This is not cool for many reasons. So how do we fix this? We can implement an executer function for JSON blocks. See how it works in action:

#+begin_src http :pretty :wrap src json
#+end_src

#+RESULTS:
#+begin_src json :jq .title
{
  "userId": 1,
  "id": 1,
  "title": "delectus aut autem",
  "completed": false
}
#+end_src

#+RESULTS:
: delectus aut autem

First I hit C-c C-c on the http block and it outputs the JSON response. Then I add the :jq .title part to the resulting JSON block, hit C-c C-c on it, and it outputs the result of my jq expression. Like the :select parameter of ob-http, the :jq parameter of the JSON block simply pipes the JSON into jq with the given jq expression. I used :jq instead of :select as the parameter name because my custom executer also supports manipulating the JSON using node. See this:

#+begin_src json :node it.title.toUpperCase()
{
  "userId": 1,
  "id": 1,
  "title": "delectus aut autem",
  "completed": false
}
#+end_src


The :node parameter takes arbitrary JavaScript code and runs it using the node binary. The variable it represents the whole JSON. And here is the implementation for this executer:

(defun org-babel-execute:json (body params)
  (let ((jq (cdr (assoc :jq params)))
        (node (cdr (assoc :node params))))
    (cond
     (jq
      (with-temp-buffer
        ;; Insert the JSON into the temp buffer
        (insert body)
        ;; Run jq command on the whole buffer, and replace the buffer
        ;; contents with the result returned from jq
        (shell-command-on-region (point-min) (point-max) (format "jq -r \"%s\"" jq) nil 't)
        ;; Return the contents of the temp buffer as the result
        (buffer-string)))
     (node
      (with-temp-buffer
        ;; Bind the whole JSON to `it`, append the user's expression and
        ;; pipe everything to `node -p`
        (insert (format "const it = %s;" body))
        (insert node)
        (shell-command-on-region (point-min) (point-max) "node -p" nil 't)
        (buffer-string))))))

Simple, isn’t it? Just to summarize, here is what is going on:

  • If you want to implement a custom executer for an arbitrary source code block, you need to create a function named org-babel-execute:$SRC_BLOCK_LANG_NAME.
    • This function takes two parameters, the body of the block and an alist of parameters passed to the header of the source block.
    • You need to return the result body from the function.
  • For piping some arbitrary text into a binary, use shell-command-on-region in combination with with-temp-buffer. It’ll save you from a lot of trouble, like escaping quotes and all the other shell quirks.
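As an aside, the stdin-piping trick used in the :node branch can be tried directly from a shell: node reads the script from stdin, and -p prints the value of the final expression (assuming node is installed):

```shell
# Same shape as what the executer builds: bind the JSON to `it`,
# then evaluate an arbitrary expression against it.
echo 'const it = {"title": "delectus aut autem"}; it.title.toUpperCase()' | node -p
```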

Note that you can fold a source block by hitting TAB. With that in mind, you can use this executer as a live JSON playground: change your expression, fold the code block to hide the clutter, hit C-c C-c, see the result, and repeat.

Custom executers for custom modes

Our executer function example was for json-mode, an already-existing major mode. You can also create arbitrary major modes, write executers for them, and start using them in your org-mode documents right away. Here is another example case: we use Couchbase quite a lot at work, and I have a bunch of queries saved in org-mode documents. It would be good to have an ob-n1ql package (N1QL is Couchbase’s SQL-like language) that lets me run Couchbase queries right inside org-mode. I looked it up and found no N1QL mode for Emacs, let alone a package like ob-n1ql for org-mode. But it was quite easy to roll my own, see this:
  • First I created a derived major mode named n1ql-mode. N1QL is just like SQL, so I simply extended sql-mode. This way we get syntax highlighting and a bunch of other stuff for free.
(define-derived-mode n1ql-mode sql-mode "n1ql-mode")
  • Then I created a function that executes given N1QL query using the cbq command line tool that Couchbase provides:
(cl-defun isamert/cbq (query &key host username password (select "."))
  "Run a couchbase query and return the result."
  (with-temp-buffer
    (insert query)
    (shell-command-on-region
     (point-min) (point-max)
     (format "cbq -quiet -engine %s -credentials '%s'"
             host
             (format "%s:%s" username password))
     nil t)
    ;; Do some cleaning up
    (replace-regexp-in-region "^cbq> " "" (point-min) (point-max))
    ;; N1QL returns a JSON response, so it might be a good idea to
    ;; provide a way to filter the result with jq, like what ob-http
    ;; does with its :select parameter
    (shell-command-on-region (point-min) (point-max) (format "jq -r %s" select) nil t)
    (buffer-string)))
  • And finally, an executer function for N1QL mode, so that we can run our queries right inside org-mode:
(defun org-babel-execute:n1ql (body params)
  (isamert/cbq
   body
   :host (alist-get :host params)
   :username (alist-get :username params)
   :password (alist-get :password params)
   :select (alist-get :select params)))

…and this is how you would use it:

#+begin_src n1ql :host DB_HOST :username DB_USERNAME :password DB_PASSWORD
  SELECT * FROM SomeTable LIMIT 10;
#+end_src

You can turn any REPL/CLI tool into a language that can be executed right inside an org-mode document. This brings you the benefit of having interactive notes: your learning environment and your testing environment are the same, which lets you progress quicker. I even do production troubleshooting inside org-mode documents, so that at the end of the day I have a clear document showing the exact runnable steps to solve a problem.

Announcement: sozluk.el, an online dictionary application for Emacs

Apparently I do have a few RSS followers, and maybe some of them can read Turkish:

sozluk.el consists of a few functions I wrote one evening, a result of my compulsion to move everything into Emacs; it is a dictionary package that works using the APIs of the sites in question. It shows the results in a nicely formatted org-mode buffer. There are various demo GIFs in the README file. Consider this an announcement for anyone interested.


The sites in question provide no information about the usage rights of their APIs. I don't think they will come after me over a package that only a handful of people will use, but if I receive any kind of warning, this will probably be a package I end up taking down.

Global interactive Emacs functions

While I spend a good chunk of my day staring at an Emacs window, sometimes I (unfortunately) need to switch to other applications. If I want to call an Emacs function, I need to go back to Emacs, call the command, and then return to what I was working on. While sometimes justifiable, this is too much work if you do it frequently. You can utilize emacsclient for situations like this. Start Emacs as a daemon or call (server-start) after starting Emacs. Now you can do this:
emacsclient --eval "(arbitrary-elisp-code)"

It'll simply execute the elisp code you supplied. Using a tool like sxhkd, you can bind any key to this command and call it outside of the Emacs window without a problem. This is fine if your command does not require user interaction. For example, I use empv to consume multimedia. I can bind the following command to a global key to get basic pause/resume functionality outside of Emacs:

emacsclient --eval "(empv-toggle)"
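With sxhkd, for instance, the binding could look like this (the key chord here is just an assumption for illustration):

```
# ~/.config/sxhkd/sxhkdrc
super + m
    emacsclient --eval "(empv-toggle)"
```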

For interactive commands, you need to switch to an Emacs window (or spawn a new one) and call the command. While there is no way around this for certain complex commands, we can do better for simpler ones. Let's take the (empv-play-radio) command as an example. It shows a list of radio channels through completing-read, expects you to select one, and starts playing the selected one. Switching to an Emacs window just to select a radio channel is too much, and the following will not help either, as it will just show the completing-read interface on an already existing Emacs window:

emacsclient --eval "(empv-play-radio)"

But the following will show the radio channels using rofi (or choose, if you are on macOS) wherever you've called it:

emacsclient --eval "(isamert/globally (empv-play-radio))"

Here is what the command looks like without wrapping it in (isamert/globally):


Here is how it looks when you wrap it in (isamert/globally ...), using the default rofi config:


And this is how it looks on macOS with choose:


isamert/globally is a pretty simple macro that overrides completing-read-function for the currently running context.

(defmacro isamert/globally (&rest body)
  `(let ((completing-read-function #'isamert/dmenu))
     ,@body))
isamert/dmenu is a little more complex, but what it essentially does is act like completing-read while using system-level tools like rofi or choose, returning the selected item just as the default completing-read does.

(defun isamert/dmenu (prompt items &rest ignored)
  "Like `completing-read' but instead use dmenu.
Useful for system-wide scripts."
  (with-temp-buffer
    (insert
     (string-join
      (cond
       ((functionp items)
        (funcall items "" nil t))
       ((listp (car items))
        (mapcar #'car items))
       (t items))
      "\n"))
    (shell-command-on-region
     (point-min) (point-max)
     (pcase system-type
       ('gnu/linux (format "rofi -dmenu -fuzzy -i -p '%s'" prompt))
       ('darwin "choose"))
     nil t "*isamert/dmenu error*" nil)
    (string-trim (buffer-string))))
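As a quick sanity check, evaluating a form like the following inside Emacs (a hypothetical example, not from the post) pops up rofi (or choose) instead of the minibuffer and returns your pick:

```elisp
;; Hypothetical usage: shows the three candidates in rofi/choose and
;; returns the selected string.
(isamert/globally
 (completing-read "Pick one: " '("foo" "bar" "baz")))
```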

While it is easy to implement this for completing-read (because it uses the variable completing-read-function to do the real lifting), it is not as easy to convert a function like read-string into a global one that works outside of Emacs. But we can still do something. Let's make isamert/globally support read-string too.

First, I'm going to define our system-level read-string alternative:

(defun isamert/system-read-string (prompt)
  "Like `read-string' but use an Emacs-independent system-level app
to get user input.  You need to install `zenity'."
  (string-trim
   (shell-command-to-string
    (format "zenity --entry --text='%s'" prompt))))

Then I’m going to add an around advice to read-string:

(defvar isamert/defer-to-system-app nil)

(define-advice read-string (:around (orig-fun prompt &rest args) defer-to-system-app)
  "Run read-string on system-level when `isamert/defer-to-system-app` is non-nil."
  (if isamert/defer-to-system-app
      (isamert/system-read-string prompt)
    (apply orig-fun prompt args)))

With this advice, read-string will use isamert/system-read-string to get the user input when the variable isamert/defer-to-system-app is non-nil. We set this variable to nil by default so that none of our functions that use read-string are affected by this change. Now, the last part:

(defmacro isamert/globally (&rest body)
  `(let ((completing-read-function #'isamert/dmenu)
         (isamert/defer-to-system-app t))
     ,@body))
We updated isamert/globally to also set isamert/defer-to-system-app to t for the currently running context. This way the advice we added to read-string kicks in and replaces the default behavior with our isamert/system-read-string function.

Just to make this concrete, here is how it looks:


Now, if you want to extend isamert/globally, you need to define an around advice for each function you want to provide a system-level alternative for, and make it dynamically select either the default implementation or the system-level alternative based on the isamert/defer-to-system-app variable. This works nicely for simple use cases, like getting a string from the user or making the user select a string from a list, but I don't think anything beyond that is sustainable. If this seems useful, I can turn it into a package. Currently I use it with empv.el, with my custom password manager, and for interactively selecting and inserting my yankpad snippets into other programs.
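For instance, extending the same pattern to yes-or-no-p might look like the following sketch; the zenity --question dialog is my assumption for illustration, not something the post describes:

```elisp
;; Hypothetical extension: defer `yes-or-no-p' to a zenity dialog when
;; `isamert/defer-to-system-app' is non-nil.
(define-advice yes-or-no-p (:around (orig-fun prompt) defer-to-system-app)
  "Use a system dialog when `isamert/defer-to-system-app' is non-nil."
  (if isamert/defer-to-system-app
      ;; zenity exits with 0 for "Yes" and non-zero for "No".
      (= 0 (call-process "zenity" nil nil nil
                         "--question" (format "--text=%s" prompt)))
    (funcall orig-fun prompt)))
```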

Let me know if this seems useful or if you have an improvement over what I described above, I would love to hear more use cases for this!

Typing (unicode) characters programmatically on Linux and macOS

I've been using KMonad on my Linux boxes and on the Mac machine I use for work. KMonad is currently the best solution for having consistent keyboard remapping across different operating systems and different keyboard types/layouts. It's not very well explained, but you can check out my KMonad configuration here.

One other thing I utilize KMonad for is typing unicode characters. I use them while coding or even while casually messaging with people; they are more expressive and better looking. The usual way of inserting unicode characters is using your operating system's compose key. KMonad also utilizes it; take a look here to learn more about it. I don't use this approach because it's janky and limiting:

  • It's complicated. You need to generate a file with the compositions beforehand and configure KMonad based on it.
  • It does not work reliably for me.
  • It does not work on Mac.

My solution is to use an external program to type for me. For example on X11 systems, you can use xdotool to write a character or multiple characters for you:

xdotool type "qwe"

When you run this, it behaves exactly as if you had hit q, w and e on your keyboard. But xdotool is also not reliable when it comes to typing unicode characters. The most reliable and extensible solution I found is described in this post. pynput is great and it works! I created this script based on the answer I linked:

#!/usr/bin/env python

import sys
from pynput.keyboard import Controller

Controller().type(' '.join(sys.argv[1:]))

Save it into a file named xtype and you can do the following:

xtype "λ"

…and it will properly type “λ”. It also works quite well on the Mac, but I found it a bit slower there. So here is a faster solution for the Mac (which I simply copied from here, simplifying the compilation process a bit for you):

#import <Foundation/Foundation.h>
// Following import may or may not be needed. It does not build on M1
// Pro without this but it was building on 2019 Pro.
#import <CoreGraphics/CoreGraphics.h>

int main(int argc, const char * argv[]) {
  @autoreleasepool {
    if (argc > 1) {
      NSString *theString = [NSString stringWithUTF8String:argv[1]];
      NSUInteger len = [theString length];
      NSUInteger n, i = 0;
      CGEventRef keyEvent = CGEventCreateKeyboardEvent(nil, 0, true);
      unichar uChars[20];
      while (i < len) {
        n = i + 20;
        if (n>len){n=len;}
        [theString getCharacters:uChars range:NSMakeRange(i, n-i)];
        CGEventKeyboardSetUnicodeString(keyEvent, n-i, uChars);
        CGEventPost(kCGHIDEventTap, keyEvent); // key down
        CGEventSetType(keyEvent, kCGEventKeyUp);
        CGEventPost(kCGHIDEventTap, keyEvent); // key up (type 20 characters maximum)
        CGEventSetType(keyEvent, kCGEventKeyDown);
        i = n;
        [NSThread sleepForTimeInterval:0.004]; // wait 4/1000 of a second; 0.002 is OK on my computer, I use 0.004 to be safe, increase it if you still have issues
      }
    }
  }
  return 0;
}

Save this into a file named xtype.m and compile it with the following command:

clang -framework Foundation -framework ApplicationServices xtype.m -l objc -o xtype

Now, again, you can do:

xtype "λ"

…and it will properly type “λ”, but a bit faster this time. This might be a bit overkill though; you can still make use of pynput on the Mac and call it a day.

Now you can use this xtype command in KMonad (see tutorial.kbd to learn about running commands with (cmd-button "...")), or with your keybinding manager, like sxhkd.

Unfortunately, pynput does not work on a pure Wayland session at the moment, but there is an issue about it that you can track. Meanwhile, you can try wtype instead. It works with unicode characters, as stated in its README, but it does not work on GNOME or KDE Wayland sessions. I haven't been able to find a good solution for Wayland, but that is the situation for quite a lot of things on Wayland, at least for now.

A simple new tab page (that works with TreeStyleTab)

I use TreeStyleTab to manage my tabs. It's quite powerful and has quite a few extensions that expand its functionality even more, but it's missing one little piece: named tab groups. You can group tabs under another tab, but you can't create dummy tabs with a specific name. One workaround is to use a blank tab to group tabs under and rename that tab's title using ~document.title = “THE TITLE YOU WANT”~. This works, but if you accidentally refresh this tab or restart your Firefox session, your tab name will be gone.

A better solution is creating a “tab group” and renaming it. A tab group can be created by visiting the address ext+treestyletab:group. You can rename the group by clicking the “Group” text on the page. It stores the tab name as a URL parameter, so it does not get lost. See this issue for more information.

I don't use this solution because ext+treestyletab:group is an extension page and other extensions cannot work within it. I created an HTML file that you can use to fix this situation. It also uses URL parameters to save the title. Here is the code.

I quite like tabliss and I was using it as my new tab page. My newtab.html works similarly: it gets a random picture from Unsplash and shows it on the page. You can even use the bookmark tree functionality, and when the session is restored the tab title will still be there! (Also see below for the managed version of this file, so you don't need to hassle with setting it up.)

You need to get an API key from Unsplash to make it work and add it to the code. A demo app API key will serve you more than well; you don't need to upgrade it.

Here is how it looks on my machine:


You can click and edit the title on the page directly. If you choose not to show the title on the page, you can still edit the ?title=Your Title parameter as you wish.

A couple of things to note about the file:

  • Use ?title=THE TITLE YOU WANT to set the title or simply click the title on the page and edit it.
  • You can take notes; notes are attached to the title and saved to localStorage. Click a bit below the clock, which is where the note area is. (See the picture below.)
  • It’ll do a new API call to Unsplash to get a new image every time you refresh or open a new page.
    • You can add &cache=true to get a random already-seen image without doing an API call. It selects a random image from the ones already shown, keeping their URLs in localStorage under the cachedImages key.
    • If you hit the API limits, it starts using the cached image URLs, as explained above.
  • You can export/import your configuration and notes, see the bottom of this page.
  • There are some customizations you can do; see the beginning of the <script> tag and set them to your liking. Or see the bottom of this page for how to change these settings without editing the file.


I also uploaded this page to my website; you can use it if you don't want to deal with downloading it etc. It does not make any network calls (except to Unsplash, of course). You still need to get an Unsplash API key. After getting the API key, open the console (F12) and execute the following:

localStorage.setItem("unsplashApiKey", "YOUR_API_KEY_HERE")

You can also manage other settings like this:

// Change true to false if you don't want to see them; these are the defaults:
localStorage.setItem("showClock", true)
localStorage.setItem("showTitle", true)
localStorage.setItem("showNotes", true)

// How many seconds before fetching a new image? (Following sets to 5 minutes)
// Setting it to false causes every refresh to fetch a new random image.
localStorage.setItem("newImageInterval", 5 * 60);

There are also two other functions that you can use:

// This exports your configuration and notes into a JSON file.

// Import an already exported configuration and notes. (You may need
// to click the page first and then run this quickly, otherwise it may
// give you an error complaining about user interaction)

Publish Code

(require 'org)

;; Variables

(defvar isamert/blog-publish-path
  "A (relative) directory name.  All the generated files will be exported under this directory.")

(defvar isamert/blog-url
  "URL of the blog.")

(defvar isamert/blog-title
  "Title of the blog, used while generating RSS etc.")

(defvar isamert/blog-description
  "My notes and ideas."
  "Description of the blog, used while generating RSS etc.")

(defvar isamert/blog-local-port
  "Port to serve local blog instance.")

(defvar isamert/blog-rss-per-tag
  "Whether to generate RSS per tag.
Non-nil value means generating `TAG.xml' files under given
directory.  By default it'll create `feed/TAG.xml' for every

;; Internal vars

(defvar isamert/blog-server-process
  "The process running server.")

(defvar isamert/blog-templates
  "Template name-value pairs.")

;; Main logic

(defun isamert/blog-export-all ()
  "Export/render current file as a whole blog."
  (setq org-html-html5-fancy t
        org-html-doctype "html5")

(defun isamert/blog-export-all-posts-pages ()
  (org-map-entries 'isamert/blog-export-current "LEVEL=1"))

(defun isamert/blog-export-current ()
  "Export/render the current header."
      (let* ((title (org-entry-get nil "ITEM"))
             (publish-date (org-entry-get nil "PUBLISH_DATE"))
             ;; ^ The post path will be determined based on this.
             (update-date (org-entry-get nil "UPDATE_DATE"))
             ;; ^ When the post is updated last.
             (tags (or (org-entry-get nil "TAGS") ""))
             (is-page (string-match-p "page" tags))
             ;; ^ Is this a page or post?
             (template (or (org-entry-get nil "TEMPLATE")
                           (if is-page "page-template" "post-template")))
             ;; ^ The template can be overridden by "TEMPLATE" property
             (export-path (org-entry-get nil "EXPORT_AS"))
             ;; ^ The export path can be overridden by "EXPORT_AS" property
             (custom-id (org-entry-get nil "CUSTOM_ID"))
             ;; ^ ...
             (options (org-entry-get nil "OPTIONS"))
             ;; ^ Like #+OPTIONS but for current header only
             (author (isamert/org-get-keyword "AUTHOR"))
             ;; ^ AUTHOR of the current header OR the global #+AUTHOR
             (file-path nil))
        (when (not isamert/blog-templates)
        (when (isamert/blog-can-export-current)

          ;; Update the id after the pre-process
          (setq custom-id (org-entry-get nil "CUSTOM_ID"))
          (setq file-path (isamert/blog-mk-path t))

            :title title
            :body (isamert/body-html)
            :publish-date publish-date
            :update-date update-date
            :author author
            :tags tags))
          `(,title ,export-path ,is-page ,publish-date))))))

(defun isamert/blog-generate-tag-info-current ()
  "Return the list of tags and post information for current item.."
  (when (isamert/blog-can-export-current)
    (cons (--filter (not (s-blank? it)) (s-split ":" (org-entry-get nil "TAGS")))
          `(:title ,(org-entry-get nil "ITEM") :path ,(isamert/blog-mk-path) :abstract ,(or (org-entry-get nil "ABSTRACT") "")))))

;; Processing

(defun isamert/blog-can-export-current ()
  "Return if the current header should be exported or not."
  (not (string-match-p "noexport" (or (org-entry-get nil "TAGS") ""))))

(defun isamert/blog-pre-process ()
  "Pre-process the document/header.
This does some changes on the original document like:
- Set CUSTOM_ID property for each header.  This enables to have
  meaningful header links in the generated pages.  CUSTOM_IDs are
  generated based on the header title, like \"My first post\"
  becomes \"my-first-post\". If you supply the CUSTOM_ID manually
  it'll be used instead."
  (org-map-entries 'isamert/blog-generate-custom-id))

(defun isamert/blog-generate-custom-id ()
  "Create a CUSTOM_ID if the header does not have one.
\"My first post\" will become \"my-first-post\"."
  (when (not (org-entry-get nil "CUSTOM_ID"))
    (org-set-property "CUSTOM_ID" (isamert/blog-url-case (org-entry-get nil "ITEM")))))

(defun isamert/plist-keys (plist)
  (--filter (equal (mod it-index 2) 0) plist))

(defun isamert/plist-values (plist)
  (--filter (not (equal (mod it-index 2) 0)) plist))

(defun isamert/eval-code-with-env (env code)
  (let* ((params (->> env
                   (-map #'symbol-name)
                   (--map (s-chop-prefix ":" it))
                   (s-join " ")))
         (values (->> env
                   (-map #'prin1-to-string)
                   (s-join " ")))
         (eval-code (format "((lambda (%s) %s) %s)" params code values)))
    (eval (car (read-from-string eval-code)))))

(defun isamert/template (template-name &rest args)
   (cdr (assoc template-name isamert/blog-templates))
   (lambda (var plist)
     (if (s-prefix? "(" var)
         ;; WARN: Here I remove "(" and ")" from the output of the evaluated
         ;; code to make lists appear as intended.  This may cause issues if
         ;; the output itself is not a list but a string wrapped in parentheses.
         (->> (isamert/eval-code-with-env plist var)
           (format "%s")
           (s-chop-prefix "(")
           (s-chop-suffix ")"))
       ;; Every key in plist starts with ":", hence (concat ":" var)
       (plist-get plist (intern (concat ":" var)))))

(defun isamert/blog-pre-process-pure (backend)
  "This function runs before export procedure and does some changes on the content."
  ;; Show everything first
  ;; Override global #+OPTIONS with current headers OPTIONS property
  (goto-char (point-min))
  (-when-let (opts (org-entry-get nil "OPTIONS"))
    (insert (format "#+OPTIONS: %s\n" opts)))
  ;; Kill the post header, because the user may have explicitly put the post header somewhere else in the template
  (goto-char (point-min))

(defun isamert/body-html ()
    (let ((org-html-htmlize-output-type nil))
      (add-hook 'org-export-before-parsing-hook 'isamert/blog-pre-process-pure)
      (org-html-export-as-html nil nil nil t nil)
      (remove-hook 'org-export-before-parsing-hook 'isamert/blog-pre-process-pure)

;; TODO: fix document
(defun isamert/blog-refresh-templates ()
  "Load the templates according to following rules.
- If there is a codeblock in the document named \"post-template\",
  use it as the default template for posts, otherwise use
  the `isamert/blog-post-template' value as the post template.
- If there is a codeblock in the document named \"page-template\",
  use it as the default template for pages, otherwise use
  the `isamert/blog-page-template' value as the post template.
- Load all codeblocks with their names ending with \"-template\"
  and make them available for use, so that you can set TEMPLATE property
  to a header and it will be exported using that template.  For example,
  assume you have codeblock with \"about-template\" name  and you have
  a header with TEMPLATE property set to \"about\".  This page/post will be
  exported with the \"about-template\".:
  * A post or page
  :TEMPLATE: about

  * My templates :noexport:
  ,,#+name: about-template
  ,,#+begin_src html
    (-as-> (org-property-values "TEMPLATE") templates
           (append '("post-template" "page-template" "tag-template"))
           (--keep it templates)
           (--map `(,it . ,(org-babel-find-named-block it)) templates)
           (--filter (cdr it) templates)
           (--map `(,(car it) . ,(progn (goto-char (cdr it)) (org-babel-expand-noweb-references))) templates)
           (setq isamert/blog-templates templates))))

;; Page/post utils

(defun isamert/blog-url-case (str)
  (->> (downcase str)
       (s-replace "+" "p")
       (s-replace "(" "-")
       (s-replace ")" "-")
       (s-replace "=" "-")
       (s-replace ":" "-")
       (s-replace "." "-")
       (s-replace " " "-")
       (s-replace "---" "-")
       (s-replace "--" "-")
       (s-replace "'" "")
       (s-replace "," "")
       (s-replace "/" "-")
       (s-replace "ö" "o")
       (s-replace "ı" "i")
       (s-replace "ğ" "g")
       (s-replace "ü" "u")
       (s-replace "ş" "s")
       (s-replace "ö" "o")
       (s-replace "ç" "c")))

(defun isamert/blog-mk-path (&optional with-publish-dir)
  (let* ((title (org-entry-get nil "ITEM"))
         (publish-date (org-entry-get nil "PUBLISH_DATE"))
         (custom-id (org-entry-get nil "CUSTOM_ID"))
         (export-path (org-entry-get nil "EXPORT_AS"))
         (result (if export-path
                     (format "%s.html" export-path)
                   (format "%s/%s.html"
                           (s-join "/" (isamert/parse-org-date publish-date))
    (if with-publish-dir
        (format "%s/%s" isamert/blog-publish-path result)

(defun isamert/parse-org-date (date)
    (when (string-match "\\([0-9]\\{4\\}\\)-\\([0-9]\\{2\\}\\)-\\([0-9]\\{2\\}\\)" date)
      `(,(match-string 1 date) ,(match-string 2 date) ,(match-string 3 date)))))

;; General utils

(defun isamert/read-file-to-string (file-path)
  "Return FILE-PATH's file content."
    (insert-file-contents file-path)

(defun isamert/write-string-to-file (file string)
  (make-directory (file-name-directory file) t)
  (with-temp-file file
    (insert string)))

(defun isamert/plist-to-alist (args)
  (--map `(,(car it) . ,(cadr it)) (-partition-all 2 args)))

(defun isamert/org-get-keyword (keyword)
  "Used to get KEYWORDs like \"AUTHOR\" \"TITLE\" etc.  They can be overridden in header properties."
  (-if-let (local-val (org-entry-get nil keyword))
      (cadr (assoc keyword (org-collect-keywords `(,keyword))))))

;; RSS

(defun isamert/blog-generate-rss ()
  (let* ((taggeds (make-hash-table :test 'equal))
         (items (->> (org-map-entries 'isamert/blog-generate-rss-item-current "LEVEL=1")
                     (--keep it))))
    (-each items
      (-lambda ((tags . item))
        (--each tags (push item (gethash it taggeds '())))))

     (format "%s/%s/main.xml"
     (isamert/blog-make-rss-doc (-map 'cdr items)))

    (--each (hash-table-keys taggeds)
       (format "%s/%s/%s.xml"
       (isamert/blog-make-rss-doc (gethash it taggeds) it)))))

(defun isamert/blog-generate-rss-item-current ()
  "Return the list of tags and RSS item for current post."
  (when (isamert/blog-can-export-current)
    `(,(--filter (not (s-blank? it)) (s-split ":" (org-entry-get nil "TAGS")))
      <description><![CDATA[ %s ]]></description>
        (org-entry-get nil "ITEM")
        (format-time-string "%a, %d %b %Y %H:%M:%S %z" (encode-time (org-parse-time-string (org-entry-get nil "PUBLISH_DATE"))))
        (or (org-entry-get nil "ABSTRACT") "")))))

(defun isamert/blog-make-rss-doc (items &optional tag)
  (format "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>
<rss version=\"2.0\">
   (if tag
       (format "%s - %s" tag isamert/blog-title)
   (--reduce (format "%s%s" it acc) items)))

(defun isamert/blog-generate-tag-pages ()
  (let* ((taggeds (make-hash-table :test 'equal))
         (items (->> (org-map-entries 'isamert/blog-generate-tag-info-current "LEVEL=1")
                     (--keep it))))
    (-each items
      (-lambda ((tags . item))
        (--each tags (push item (gethash it taggeds '())))))

    (--each (hash-table-keys taggeds)
       (format "%s/%s/%s.html"
       (isamert/template "tag-template"
                         :title (format "Posts tagged with %s" it)
                         :tag-name it
                         :posts `',(gethash it taggeds))))))

;; Sitemap

(defun isamert/blog-generate-sitemap ()
    (lambda ()
      (when (isamert/blog-can-export-current)
        (format "
                (s-join "-" (isamert/parse-org-date (or (org-entry-get nil "MODIFY_DATE") (org-entry-get nil "PUBLISH_DATE"))))))
      ) "LEVEL=1")
   (--keep it)
   (--reduce (format "%s%s" acc it))
   (format "<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<urlset xmlns=\"\">%s
   (isamert/write-string-to-file (format "%s/sitemap.xml" isamert/blog-publish-path))))

;; Development utils

(defun isamert/blog-start-local-server ()
  "Start the local server."
  (let ((default-directory (expand-file-name isamert/blog-publish-path)))
    (setq isamert/blog-server-process
          (start-process "isamert/blog-server-process" nil "python" "-m" "http.server" (format "%s" isamert/blog-local-port)))
    (browse-url (format "http://localhost:%s" isamert/blog-local-port))))

(defun isamert/blog-stop-local-server ()
  "Stop the local server."
  (when isamert/blog-server-process
    (delete-process isamert/blog-server-process)
    (setq isamert/blog-server-process nil)))

(defun isamert/blog-restart-local-server ()
  "Restart the local server."

(defun isamert/blog-open-current ()
  "Open the current headers exported file in browser."
  (if isamert/blog-server-process
       (format "http://localhost:3000/%s"
     (isamert/blog-mk-path t))))

;; Template functions

(defun isamert/org-date-to-iso (org-date)
  "Convert given ORG-DATE (like [2021-03-28 Sun]) to ISO date (2021-03-28)"
  (s-replace-regexp "[^0-9-]" "" org-date))

(defun isamert/create-tag-list (tags)
  (->> (s-split ":" tags)
    (--filter (not (s-blank? it)))
    (--map (format "<a href=\"/tags/%s.html\" class=\"tag-link\">%s</a>" it it))
    (s-join ", ")))

(defun isamert/blog-all-tags ()
  "Return all tag names that has been used in the blog."
  (--remove (-contains? '("noexport" "page") it) (-flatten (org-get-buffer-tags))))

Load blog code

  • The following automatically loads the code above when this file is opened. It makes use of file-local variables.







