BUG: kernel NULL pointer dereference in zap_lockdir #11804
This happened to me this morning on a recently built Proxmox VE 7 server running sanoid/syncoid every minute. It's a FreeNAS Mini server which uses an ASRock Rack C2750D4I. I've had two of these servers running on FreeBSD 12.2 for years without issue, and just recently converted them over to Proxmox/ZFS on Linux. Almost all zfs/zpool commands froze completely (ctrl-c did not work), though I was able to run a
This was a freshly built pool as of a few weeks ago. The only thing I've done to it since I built it was to swap out what turned out to be an SMR drive (all drives are now CMR). As you can see, the resilver completed successfully. No errors found. The filesystems could still be accessed, though accessing a snapshot directory hung. I could find no other errors in any of the logs.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.
Hi, I seem to have hit a similar issue while doing a send. The system is Ubuntu 22.04 with stock ZFS (2.1.4-0ubuntu0.1), and encryption is enabled.
Four days ago I upgraded from Ubuntu 20.04 with stock ZFS 0.8, which never had any issue in more than 2 years of heavy usage (snapshots of VM zvol devices sent to a backup server every 15 minutes). After the upgrade, I ran a "zpool upgrade -a". Everything seemed OK until this morning, when I found the machine stuck while doing a regular "send".
Additional information: for two days prior to the crash, syncoid had been failing to send snapshots of a particular path; more precisely, the receiver was closing because of an "invalid backup stream" error:
Indeed, "zpool status" was reporting one snapshot of this path to be corrupted. I have since started a scrub, and while "zpool status" still reports a permanent error, it's now on an anonymous file and no longer linked to the snapshot; furthermore, syncoid no longer fails. Hopefully the interrupted "send" was the trigger of the kernel crash and I won't get it again anytime soon.
Encountered this today during a raw encrypted zfs send/recv using syncoid. File operations still work, but zpool/zfs commands are unresponsive. Debian Bullseye, using ZFS 2.1.9 and Linux kernel 6.0.12 from backports.
One part of web 3.0 is being able to annotate and share comments on the web. This article is my best attempt to find a nice open source privacy-friendly tool. Spoiler: there aren't any :P The alternative I'm using so far is to process the data at the same time as I underline it.

- On the mobile/tablet you can split your screen and have Orgzly in one tab and the browser in the other, so that underlining, copying and pasting doesn't break the workflow too much.
- On the eBook I underline it and post-process it afterwards.

The idea of using an underlining tool makes sense when you post-process the content in a more efficient environment such as a laptop; the use of Orgzly is a kind of preprocessing. If the underlining software could easily export the highlighted content along with the link to the source, it would be much quicker. The advantage of using Orgzly is also that it works today both online and offline, and it is more privacy friendly. In the post I review some of the existing solutions.

feat(ansible_snippets#Avoid arbitrary disk mount): Avoid arbitrary disk mount

Instead of using `/dev/sda` use `/dev/disk/by-id/whatever`.

feat(ansible_snippets#Get the user running ansible in the host): Get the user running ansible in the host

If you `gather_facts`, use the `ansible_user_id` variable.

feat(antiracism): Recommend the podcast episode "El diario de Jadiya"

[Diario de Jadiya](https://deesonosehabla.com/episodios/episodio-2-jadiya/) ([link to the audio file](https://dts.podtrac.com/redirect.mp3/dovetail.prxu.org/302/7fa33dd2-3f29-48f5-ad96-f6874909d9fb/Master_ep.2_Jadiya.mp3)): something every xenophobe should listen to; it's the diary of a Sahrawi girl who took part in the program of summers with Spanish families.

feat(bash_snippets#Self delete shell script): Self delete shell script

Add at the end of the script:

```bash
rm -- "$0"
```

`$0` is a magic variable for the full path of the executed script.
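A minimal sketch of the trick in action: it writes a throwaway bash script (to a temporary path generated on the fly, assuming `bash` is available) and shows that the script removes itself after running.

```python
import os
import stat
import subprocess
import tempfile
import textwrap

# Write a throwaway bash script that deletes itself with `rm -- "$0"`.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(textwrap.dedent("""\
        #!/bin/bash
        echo "doing some work"
        rm -- "$0"
    """))
    script_path = f.name

# Make it executable and run it.
os.chmod(script_path, os.stat(script_path).st_mode | stat.S_IXUSR)
subprocess.run(["bash", script_path], check=True)

print(os.path.exists(script_path))  # False: the script deleted itself
```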
feat(bash_snippets#Add a user to the sudoers through command line): Add a user to the sudoers through the command line

Add the user to the sudo group:

```bash
sudo usermod -a -G sudo <username>
```

The change will take effect the next time the user logs in. This works because `/etc/sudoers` is pre-configured to grant permissions to all members of this group (you should not have to make any changes to this):

```bash
%sudo ALL=(ALL:ALL) ALL
```

feat(bash_snippets#Error management done well in bash): Error management done well in bash

If you wish to capture error management in bash you can use the next format:

```bash
if ( ! echo "$EMAIL" >> "$USER_TOTP_FILE" )
then
  echo "** Error: could not associate email for user $USERNAME"
  exit 1
fi
```

feat(bats): Introduce bats

Bash Automated Testing System is a TAP-compliant testing framework for Bash 3.2 or above. It provides a simple way to verify that the UNIX programs you write behave as expected.

A Bats test file is a Bash script with special syntax for defining test cases. Under the hood, each test case is just a function with a description.

```bash
@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}

@test "addition using dc" {
  result="$(echo 2 2+p | dc)"
  [ "$result" -eq 4 ]
}
```

Bats is most useful when testing software written in Bash, but you can use it to test any UNIX program.

References:

- [Source](https://github.com/bats-core/bats-core)
- [Docs](https://bats-core.readthedocs.io/)

feat(calendar_management#Calendar event notification system): Add calendar event notification system tool

Set up a system that notifies you when the next calendar event is about to start, to avoid spending mental load on it and to reduce the chances of missing the event.

I've created a small tool that:

- Tells me the number of [pomodoros](task_tools.md#pomodoro) that I have until the next event.
- Once a pomodoro finishes, makes me focus on the amount left so that I can prepare for the event.
- Catches my attention when the event is starting.

feat(python_snippets#Fix variable is unbound pyright error): Fix variable is unbound pyright error

You may receive these warnings if you set variables inside `if` or `try/except` blocks, such as in the next example:

```python
def x():
    y = True
    if y:
        a = 1
    print(a)  # "a" is possibly unbound
```

The easy fix is to set `a = None` outside those blocks:

```python
def x():
    a = None
    y = True
    if y:
        a = 1
    print(a)
```

feat(detox): Introduce detox

detox cleans up filenames from the command line.

Installation:

```bash
apt-get install detox
```

Usage:

```bash
detox *
```

feat(aws#Get the role used by the instance): Get the role used by the instance

```bash
aws sts get-caller-identity
{
    "UserId": "AIDAxxx",
    "Account": "xxx",
    "Arn": "arn:aws:iam::xxx:user/Tyrone321"
}
```

You can then take the role name and query IAM for the role details, using both `iam list-role-policies` for inline policies and `iam list-attached-role-policies` for attached managed policies (thanks to @Dimitry K for the callout).

```bash
aws iam list-attached-role-policies --role-name Tyrone321
{
    "AttachedPolicies": [
        {
            "PolicyName": "SomePolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/xxx"
        },
        {
            "PolicyName": "AnotherPolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/xxx"
        }
    ]
}
```

To get the actual IAM permissions, use `aws iam get-policy` to get the default policy version ID, and then `aws iam get-policy-version` with the version ID to retrieve the actual policy statements. If the IAM principal is a user, the commands are `aws iam list-attached-user-policies` and `aws iam get-user-policy`.
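As a small sketch of scripting this chain, the JSON that `list-attached-role-policies` prints can be parsed to collect the policy ARNs you would then pass to `aws iam get-policy`. The sample data below just mirrors the shape of the output above; the ARNs are placeholders, not real policies.

```python
import json

# Sample shaped like `aws iam list-attached-role-policies` output;
# in a real script this would come from subprocess or boto3.
raw = """
{
    "AttachedPolicies": [
        {"PolicyName": "SomePolicy", "PolicyArn": "arn:aws:iam::aws:policy/xxx"},
        {"PolicyName": "AnotherPolicy", "PolicyArn": "arn:aws:iam::aws:policy/yyy"}
    ]
}
"""

response = json.loads(raw)
# Each ARN would then be fed to `aws iam get-policy --policy-arn <arn>`.
arns = [policy["PolicyArn"] for policy in response["AttachedPolicies"]]
print(arns)
```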
feat(kubectl#namespaces): Improve the way to manage kubernetes namespaces

Temporarily set the namespace for a request:

```bash
kubectl -n {{ namespace_name }} {{ command_to_execute }}
kubectl --namespace={{ namespace_name }} {{ command_to_execute }}
```

Permanently set the namespace for all subsequent requests:

```bash
kubectl config set-context --current --namespace={{ namespace_name }}
```

To make things easier you can set an alias:

```bash
alias kn='kubectl config set-context --current --namespace '
```

To unset the namespace use `kubectl config set-context --current --namespace=""`.

fix(kubernetes_jobs#the-new-way): Improve the CronJob monitoring expression

```yaml
- alert: CronJobStatusFailed
  expr: |
    kube_cronjob_status_last_successful_time{exported_namespace!=""}
    - kube_cronjob_status_last_schedule_time < 0
  for: 5m
  annotations:
    description: |
      '{{ $labels.cronjob }} at {{ $labels.exported_namespace }} namespace last run hasn't been successful for {{ $value }} seconds.'
```

feat(digital_garden#link-rot): Manage link rot

Link rot occurs when hyperlinks become obsolete or broken, leading to content loss or a diminished user experience. Here are some ways to mitigate link rot in digital gardens:

- Use permalinks: Ensure that your digital garden software supports permanent URLs (permalinks) for each note or idea. Permalinks make it easier to reference and maintain links over time because they remain stable even if the underlying content changes. This is uncomfortable to do unless your editor supports it transparently.
- Regularly update links: You can check for broken or outdated links and replace them with current references by using automated link checkers.
- Implement redirects: When restructuring your digital garden or moving content to different locations, set up redirects for old URLs to ensure that visitors are directed to the new location. This prevents link rot and maintains the continuity of your digital garden.
  I don't do it as I haven't found a way to do it automatically.
- [Archive external content](#archive-external-content): When linking to external websites or resources, consider using web archiving services to create snapshots or archives of the content. This ensures that even if the original content becomes unavailable, visitors can still access the archived versions. Check [the section below](#archive-external-content) for more information.

feat(docker#Using the json driver): Monitor logs with the json driver

This is the cleanest way to do it in my opinion. First configure `docker` to output the logs as json by adding to `/etc/docker/daemon.json`:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Then use [promtail's `docker_sd_configs`](promtail.md#scrape-docker-logs).

feat(wireguard#installation): Introduce wireguard installation

WireGuard is available from the default repositories. To install it, run:

```bash
sudo apt install wireguard
```

The `wg` and `wg-quick` command-line tools allow you to configure and manage the WireGuard interfaces.

Each device in the WireGuard VPN network needs to have a private and a public key. Run the following command to generate the key pair:

```bash
wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
```

The files will be generated in the `/etc/wireguard` directory.

WireGuard also supports a pre-shared key, which adds an additional layer of symmetric-key cryptography. This key is optional and must be unique for each peer pair.

The next step is to configure the tunnel device that will route the VPN traffic. The device can be set up either from the command line using the `ip` and `wg` commands, or by creating the configuration file with a text editor.
Create a new file named `wg0.conf`:

```bash
sudo nano /etc/wireguard/wg0.conf
```

And add the following contents:

```ini
[Interface]
Address = 10.0.0.1/24
SaveConfig = true
ListenPort = 51820
PrivateKey = SERVER_PRIVATE_KEY
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE
```

The interface can be named anything, although it is recommended to use something like `wg0` or `wgvpn0`. The settings in the interface section have the following meaning:

- Address: A comma-separated list of v4 or v6 IP addresses for the `wg0` interface. Use IPs from a range that is reserved for private networks (10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16).
- ListenPort: The listening port.
- PrivateKey: A private key generated by the `wg genkey` command. (To see the contents of the file run: `sudo cat /etc/wireguard/privatekey`.)
- SaveConfig: When set to true, the current state of the interface is saved to the configuration file on shutdown.
- PostUp: Command or script that is executed after bringing the interface up. In this example, we're using iptables to enable masquerading. This allows traffic to leave the server, giving the VPN clients access to the Internet. Make sure to replace `ens3` after `-A POSTROUTING` to match the name of your public network interface. You can easily find the interface with:

  ```bash
  ip -o -4 route show to default | awk '{print $5}'
  ```

- PostDown: Command or script that is executed before bringing the interface down. The iptables rules will be removed once the interface is down.

The `wg0.conf` and `privatekey` files should not be readable by normal users.
Use `chmod` to set the permissions to `600`:

```bash
sudo chmod 600 /etc/wireguard/{privatekey,wg0.conf}
```

Once done, bring the `wg0` interface up using the attributes specified in the configuration file:

```bash
sudo wg-quick up wg0
```

The command will produce an output similar to the following:

```bash
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.0.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
```

To check the interface state and configuration, enter:

```bash
sudo wg show wg0
interface: wg0
  public key: r3imyh3MCYggaZACmkx+CxlD6uAmICI8pe/PGq8+qCg=
  private key: (hidden)
```

You can also run `ip a show wg0` to verify the interface state:

```bash
ip a show wg0
4: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.0.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever
```

WireGuard can also be managed with systemd. To bring the WireGuard interface up at boot time, run:

```bash
sudo systemctl enable wg-quick@wg0
```

IP forwarding must be enabled for NAT to work. Open the `/etc/sysctl.conf` file and add or uncomment the following line:

```bash
sudo vi /etc/sysctl.conf
```

```ini
net.ipv4.ip_forward=1
```

Save the file and apply the change:

```bash
sudo sysctl -p
net.ipv4.ip_forward = 1
```

If you are using UFW to manage your firewall, you need to open UDP traffic on port 51820:

```bash
sudo ufw allow 51820/udp
```

Also install `wireguard` on your clients. The process for setting up a client is pretty much the same as for the server.
If the client is on Android, [the official app](https://www.wireguard.com/install/) is not on F-Droid, but you can get it through the Aurora store.

First generate the public and private keys:

```bash
wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
```

Create the file `wg0.conf`:

```bash
sudo vi /etc/wireguard/wg0.conf
```

And add the following contents:

```ini
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/24

[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = SERVER_IP_ADDRESS:51820
AllowedIPs = 0.0.0.0/0
```

The settings in the interface section have the same meaning as when setting up the server:

- Address: A comma-separated list of v4 or v6 IP addresses for the `wg0` interface.
- PrivateKey: To see the contents of the file on the client machine run: `sudo cat /etc/wireguard/privatekey`.

The peer section contains the following fields:

- PublicKey: The public key of the peer you want to connect to (the contents of the server's `/etc/wireguard/publickey` file).
- Endpoint: An IP or hostname of the peer you want to connect to, followed by a colon and the port number on which the remote peer listens.
- AllowedIPs: A comma-separated list of v4 or v6 IP addresses from which incoming traffic for the peer is allowed and to which outgoing traffic for this peer is directed. We're using 0.0.0.0/0 because we are routing all the traffic and want the server peer to send packets with any source IP.

If you need to configure additional clients, just repeat the same steps using a different private IP address.

The last step is to add the client's public key and IP address to the server. To do that, run the following command on the Ubuntu server:

```bash
sudo wg set wg0 peer CLIENT_PUBLIC_KEY allowed-ips 10.0.0.2
```

Make sure to replace `CLIENT_PUBLIC_KEY` with the public key you generated on the client machine (`sudo cat /etc/wireguard/publickey`) and adjust the client IP address if it is different.
Once done, go back to the client machine and bring up the tunneling interface:

```bash
sudo wg-quick up wg0
```

Now you should be connected to the Ubuntu server, and the traffic from your client machine should be routed through it. You can check the connection with:

```bash
sudo wg
interface: wg0
  public key: gFeK6A16ncnT1FG6fJhOCMPMeY4hZa97cZCNWis7cSo=
  private key: (hidden)
  listening port: 53527
  fwmark: 0xca6c

peer: r3imyh3MCYggaZACmkx+CxlD6uAmICI8pe/PGq8+qCg=
  endpoint: XXX.XXX.XXX.XXX:51820
  allowed ips: 0.0.0.0/0
  latest handshake: 53 seconds ago
  transfer: 3.23 KiB received, 3.50 KiB sent
```

You can also open your browser, type "what is my ip", and you should see your server's IP address.

To stop the tunneling, bring down the `wg0` interface:

```bash
sudo wg-quick down wg0
```

feat(wireguard#Allow the access to the local network): Allow access to the local network

If you want to let the peer access a server on your local network, you can add it to the `allowed-ips`:

```bash
sudo wg set wg0 peer CLIENT_PUBLIC_KEY allowed-ips 10.0.0.2,192.168.3.123/32
```

Then you need to add the route:

```bash
ip route add 192.168.3.123 dev wg0
```

feat(wireguard#Remove a peer): Remove a peer

```bash
wg show
(find the peer, note the interface and peer key)
wg set <interface> peer <key> remove
```

feat(zfs#Troubleshooting): Troubleshooting general guidelines

To debug ZFS errors you can check:

- The generic kernel logs: `dmesg -T`, `/var/log/syslog`, or wherever kernel log messages are sent.
- The ZFS kernel module debug messages: the ZFS kernel modules use an internal log buffer for detailed logging information.
  This log information is available in the pseudo file `/proc/spl/kstat/zfs/dbgmsg` for ZFS builds where the ZFS module parameter `zfs_dbgmsg_enable` is set to `1`.

feat(zfs#ZFS pool is stuck): ZFS pool is stuck troubleshooting

Symptom: a zfs or zpool command appears hung, does not return, and is not killable.

Likely cause: a kernel thread hung or panicked.

If a kernel thread is stuck, a backtrace of the stuck thread can be found in the logs. In some cases, the stuck thread is not logged until the deadman timer expires.

The only way I've yet found to solve this is rebooting the machine (not ideal). I even have to use the magic keys -.-

feat(zfs#kernel NULL pointer dereference in zap_lockdir): kernel NULL pointer dereference in zap_lockdir troubleshooting

There are many issues open with this behaviour: [1](https://github.com/openzfs/zfs/issues/11804), [2](https://github.com/openzfs/zfs/issues/6639).

In my case I feel it happens when running `syncoid` to send the backups to the backup server.

feat(linux_snippets#Make a file executable in a git repository): Make a file executable in a git repository

```bash
git add entrypoint.sh
git update-index --chmod=+x entrypoint.sh
```

feat(linux_snippets#Configure autologin in Debian with Gnome): Configure autologin in Debian with Gnome

Edit the `/etc/gdm3/daemon.conf` file and include:

```ini
AutomaticLoginEnable = true
AutomaticLogin = <your user>
```

feat(linux_snippets#See errors in the journalctl): See errors in the journalctl

To get all errors for running services using journalctl:

```bash
journalctl -p 3 -xb
```

Where `-p 3` means priority err, `-x` provides extra message information, and `-b` means since last boot.

feat(linux_snippets#Fix rsyslog builtin:omfile suspended error): Fix rsyslog builtin:omfile suspended error

It may be a permissions error. I have not been able to pinpoint the reason behind it.
What did solve it, though, is to remove the [allegedly deprecated parameters](https://www.rsyslog.com/doc/configuration/modules/omfile.html) from `/etc/rsyslog.conf`:

```
```

I hope that, as they are the default parameters, they don't need to be set.

feat(loki#Configure alerts and rules): Configure alerts and rules

Grafana Loki includes a component called the ruler. The ruler is responsible for continually evaluating a set of configurable queries and performing an action based on the result.

This example configuration sources rules from a local disk:

```yaml
ruler:
  storage:
    type: local
    local:
      directory: /tmp/rules
  rule_path: /tmp/scratch
  alertmanager_url: http://localhost
  ring:
    kvstore:
      store: inmemory
  enable_api: true
```

There are two kinds of rules: alerting rules and recording rules.

Alerting rules allow you to define alert conditions based on LogQL expressions and to send notifications about firing alerts to an external service.

A complete example of a rules file:

```yaml
groups:
  - name: should_fire
    rules:
      - alert: HighPercentageError
        expr: |
          sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
            /
          sum(rate({app="foo", env="production"}[5m])) by (job)
            > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: High request latency
  - name: credentials_leak
    rules:
      - alert: http-credentials-leaked
        annotations:
          message: "{{ $labels.job }} is leaking http basic auth credentials."
        expr: 'sum by (cluster, job, pod) (count_over_time({namespace="prod"} |~ "http(s?)://(\\w+):(\\w+)@" [5m]) > 0)'
        for: 10m
        labels:
          severity: critical
```

Recording rules allow you to precompute frequently needed or computationally expensive expressions and save their result as a new set of time series. Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.
Loki allows you to run metric queries over your logs, which means that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log:

```yaml
name: NginxRules
interval: 1m
rules:
  - record: nginx:requests:rate1m
    expr: |
      sum(
        rate({container="nginx"}[1m])
      )
    labels:
      cluster: "us-central1"
```

This query (`expr`) will be executed every 1 minute (`interval`), and its result will be stored in the metric name we have defined (`record`). This metric, named `nginx:requests:rate1m`, can now be sent to Prometheus, where it will be stored just like any other metric.

Here is an example remote-write configuration for sending to a local Prometheus instance:

```yaml
ruler:
  ... other settings ...
  remote_write:
    enabled: true
    client:
      url: http://localhost:9090/api/v1/write
```

feat(matrix): Review matrix servers

[Matrix](https://wiki.archlinux.org/index.php/Matrix) is a FLOSS protocol for open federated instant messaging. The Matrix ecosystem consists of many servers which can be used for registration.

Choose a server that doesn't engage in chaotic account or room purges; being on such a homeserver is no different from being on Discord. If a homeserver has rules, read them to check if they're unreasonably strict. Keep an eye on the usual things that tend to stink, for example a homeserver trying to suppress certain political opinions, restrict you from posting certain types of content, or otherwise impose an authoritarian environment.

https://view.matrix.org/ gives an overview of the most used channels, and on each of them you can see what servers the people use.
Looking at meaningful channels:

- [Arch Linux](https://view.matrix.org/room/!GtIfdsfQtQIgbQSxwJ:archlinux.org/servers)
- GrapheneOS: [1](https://view.matrix.org/room/!lAoVmVifHHtoeOAmHO:grapheneos.org/servers), [2](https://view.matrix.org/room/!UVEsOAdphEMYhxzTah:grapheneos.org/servers)
- [Techlore](https://view.matrix.org/room/!zjYxZkVEqwWcQQhXxc:techlore.net/servers)

You can say that the most used servers (after matrix.org) are:

- envs.net: They have [an element page](https://element.envs.net/#/welcome); I don't find any political statement.
- tchncs.de: Run by [an individual](https://tchncs.de/); I don't find any political statement either.
- t2bot.io: [It's for bots](https://t2bot.io/), so nope.
- nitro.chat: Run by [Nitrokey](https://www.nitrokey.com/about), which presents itself as "the world-leading company in open source security hardware. Nitrokey develops IT security hardware for data encryption, key management and user authentication, as well as secure network devices, PCs, laptops and smartphones. The company was founded in Berlin, Germany in 2015 and already counts tens of thousands of users from more than 120 countries, including numerous well-known international enterprises from various industries." It lists Amazon, NVIDIA, Ford and Google among its clients -> Nooope!

[Tatsumoto](https://tatsumoto-ren.github.io/blog/list-of-matrix-servers.html) doesn't recommend some of them, saying that the admin blocked rooms in the pursuit of censorship, but he also references a page that looks pretty Zionist, so I'm not sure rick... Maybe that censorship means they are good servers xD.
feat(memoria_historica#Terrorismo de estado en Euskadi): Recommend the documentary "Carpetas azules"

About state terrorism in Euskadi.

feat(orgmode#Reload the agenda on any file change): Reload the agenda on any file change

There are two ways of doing this:

- Reload the agenda each time you save a document.
- Reload the agenda every X seconds.

Reload the agenda each time you save a document. Add this to your configuration:

```lua
vim.api.nvim_create_autocmd('BufWritePost', {
  pattern = '*.org',
  callback = function()
    local bufnr = vim.fn.bufnr('orgagenda') or -1
    if bufnr > -1 then
      require('orgmode').agenda:redo()
    end
  end,
})
```

This will reload the agenda window, if it's open, each time you write any org file. It won't work yet if you archive without saving, but that can be easily fixed if you use [the auto-save plugin](vim_autosave.md).

Reload the agenda every X seconds. Add this to your configuration:

```lua
vim.api.nvim_create_autocmd("FileType", {
  pattern = "org",
  group = vim.api.nvim_create_augroup("orgmode", { clear = true }),
  callback = function()
    -- Reload the agenda every 10 seconds if it's opened so that unsaved
    -- changes in the files are shown
    local timer = vim.loop.new_timer()
    timer:start(
      0,
      10000,
      vim.schedule_wrap(function()
        local bufnr = vim.fn.bufnr("orgagenda") or -1
        if bufnr > -1 then
          require("orgmode").agenda:redo(true)
        end
      end)
    )
  end,
})
```

feat(orgmode#links): Introduce the use of links

Orgmode supports the insertion of links with the `org_insert_link` and `org_store_link` commands. I've changed the default `<leader>oli` and `<leader>ols` bindings to some quicker ones:

```lua
mappings = {
  org = {
    -- link management
    org_insert_link = "<leader>l",
    org_store_link = "<leader>ls",
  },
}
```

There are the next possible workflows:

- Discover links as you go, if you more or less know in which file the headings you want to link are:
  - Start the link helper with `<leader>l`.
  - Type `file:./` and press `<tab>`; this will show you the available files.
  - Select the one you want.
  - Then type `::*` and press `<tab>` again to get the list of available headings.
- Store the links you want to paste:
  - Go to the heading you want to link.
  - Press `<leader>ls` to store the link.
  - Go to the place where you want to paste the link.
  - Press `<leader>l` and then `<tab>` to iterate over the saved links.

feat(orgmode#Convert source code on the fly from markdown to orgmode): Convert source code on the fly from markdown to orgmode

It would be awesome if, when you do `nvim myfile.md`, it automatically converted the file to orgmode so that you can use all its power, and converted it back to markdown once you save.

I've started playing around with this but got nowhere. I leave you my breadcrumbs in case you want to follow this path:

```lua
-- Load the markdown documents as orgmode documents
vim.api.nvim_create_autocmd("BufReadPost", {
  pattern = "*.md",
  callback = function()
    local markdown_file = vim.fn.expand("%:p")
    local org_content = vim.fn.system("pandoc -f gfm -t org " .. markdown_file)
    vim.cmd("%delete")
    vim.api.nvim_put(vim.fn.split(org_content, "\n"), "l", false, true)
    vim.bo.filetype = "org"
  end,
})
```

If you make it work please [tell me how you did it!](contact.md)

feat(postgres#Operations): Postgres Operations

[Restore a dump](https://www.postgresql.org/docs/current/backup-dump.html#BACKUP-DUMP-RESTORE): text files created by `pg_dump` are intended to be read by the `psql` program. The general command form to restore a dump is:

```bash
psql dbname < dumpfile
```

Where `dumpfile` is the file output by the `pg_dump` command. The database `dbname` will not be created by this command, so you must create it yourself from `template0` before executing `psql` (e.g., with `createdb -T template0 dbname`).

`psql` supports options similar to `pg_dump` for specifying the database server to connect to and the user name to use.
See [the `psql` reference page](https://www.postgresql.org/docs/current/app-psql.html) for more information. Non-text file dumps are restored using the `pg_restore` utility.

feat(postgres#Fix pg_dump version mismatch): Fix pg_dump version mismatch

If you need to use a `pg_dump` version different from the one you have on your system, you can either [use nix](nix.md) or use docker:

```bash
docker run postgres:9.2 pg_dump books > books.out
```

Or if you need to enter the password:

```bash
docker run -v /path/to/dump:/dump -it postgres:12 bash
pg_dump books > /dump/books.out
```

fix(promtail): Improve docker log parsing to get a clean container name

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
    pipeline_stages:
      - static_labels:
          job: docker
```

fix(promtail): Improve journalctl log parsing to avoid duplicate logs

If you've set up some systemd services that run docker-compose, it's a good idea not to ingest them with promtail so as not to have duplicate log lines:

```yaml
scrape_configs:
  - job_name: journal
    journal:
      json: false
      max_age: 12h
      path: /var/log/journal
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: unit
      - source_labels: ['__journal__hostname']
        target_label: hostname
      - source_labels: ['__journal_syslog_identifier']
        target_label: syslog_identifier
      - source_labels: ['__journal_transport']
        target_label: transport
      - source_labels: ['__journal_priority_keyword']
        target_label: level
    pipeline_stages:
      - drop:
          source: syslog_identifier
          value: docker-compose
```

feat(roadmap_adjustment): Introduce roadmap adjustment

Roadmap adjustment gathers the techniques to make and review plans in order to define the optimal path in terms of efficacy and efficiency.
Roadmap adjustment can be categorized by the next approaches:

- [Adjustment type](roadmap_adjustment.md#roadmap-adjustment-types)
- [Abstraction level](roadmap_adjustment.md#Roadmap-adjustments-by-abstraction-level)
- [Purpose](roadmap_adjustment.md#roadmap-adjustments-by-purpose)

Before you dive in, here are some warnings:

- Build your own processes: Each of the adjustments defined below describes my curated process developed over the years. You can use them as a starting point to define what works for you or to get some ideas. Each of us is different and wants to spend a different amount of time on this.
- Keep them simple: It's important for the processes to be light enough that you actually want to do them, so that you see them as a help instead of a burden. It's always better to do small and quick ones than nothing at all. At the start of the process, analyze yourself to assess how much energy and time you have, and decide which steps of the guides below you want to follow.
- Alive processes: These adjustments have to reflect ourselves and our environment. As we change continuously, so will our adjustment processes. I've gone from full-blown adjustments of locking myself up for a week to not doing any for months. And that is just fine; these tools are there to help us only if we want to use them.
- Heavily orgmode oriented: This article heavily uses [orgmode](orgmode.md), my currently chosen [task tool](task_tools.md), but that doesn't mean the concepts can't be applied with other tools.
feat(roadmap_adjustment#Roadmap adjustment types): Roadmap adjustment types

There are three types of roadmap adjustment if we split them by process type:

- [Refinements](roadmap_adjustment.md#refinements): Clean up your system so it represents reality.
- [Reviews](roadmap_adjustment.md#reviews): Gather insights about your environment and yourself.
- [Plannings](roadmap_adjustment.md#plannings): Update your roadmap given the changes in your life.

feat(roadmap_adjustment#Refinements): Refinements

The real trick to ensuring the trustworthiness of the whole time management system lies in regularly refreshing your thinking and your system from a more elevated perspective. That's impossible to do if your lists fall too far behind your reality. A good way to update the system is through periodic refinements.

At some point you may feel the need to clarify the larger outcomes, the long-term goals, the visions and principles that ultimately drive your decisions. I'd advise against taking this step until you can keep your everyday world under control, otherwise you may undermine your motivation and energy rather than enhance them.

Once you feel that you have an abstraction level under control, jump to the next. Keep in mind that abstraction conquests are not permanent: life may wreak havoc and make you lose control of the lower levels. It's good then to step down and tidy things up, even if it means disregarding the higher abstractions.

Sometimes refinements can be empowering, but they are always time and energy consuming. That's why we need to define their purpose well, so that we can hit the sweet spot of benefits against efforts invested.
We need a process to refine each level of abstraction:

- [Step refinement](roadmap_adjustment.md#step-refinement)
- [Task refinement](roadmap_adjustment.md#task-refinement)
- [Project refinement](roadmap_adjustment.md#project-refinement)
- [Area refinement](roadmap_adjustment.md#area-refinement)

feat(roadmap_adjustment#Reviews): Reviews

Reviews are processes that stop your daily life to do introspections, gathering insights about your environment and yourself to better build an efficient and effective roadmap. Reviews can be done at different levels of purpose; each level gives you different benefits.

- [Month review](roadmap_adjustment.md#month-review)
- [Trimester review](roadmap_adjustment.md#trimester-review)
- [Year review](roadmap_adjustment.md#year-review)

Review guidelines:

- Review approaches: In the past I used the [life logging](life_logging.md) tools to analyze the past in order to understand what I achieved, and took it as a base to learn from my mistakes. It was useful when I needed the endorphin boost of seeing all the progress done. Once I accepted that progress speed and understood that we always do the best we can given how we are, I started to feel that the review process was too cumbersome and that it was holding me in the past. Nowadays I try not to look back but forward, and analyze the present: how I feel, how's the environment around me, and how I can tweak both to fulfill my life goals. This approach leads to less reviewing of achievements and logs and more introspection, thinking and imagining. Although it may be slower at correcting the mistakes of the past, it will surely make you live closer to the utopia. The reviews below follow that second approach.
- Reviews as deadlines: Reviews can also be used as deadlines. Sometimes deadlines help us get the motivation and energy to achieve what we want when we feel low. But remember not to push yourself too hard. If deadlines do you more wrong than right, don't use them.
All these tools are meant to help us, not to bring us down.

feat(roadmap_adjustment#Plannings): Plannings

Life planning can be done at different levels. All of them help you in different ways to reduce the mental load, and each also gives you extra benefits that can't be gained from the others. Going from the lowest to the highest abstraction level we have:

- [Day plan](roadmap_adjustment.md#make-a-day-plan).
- Week plan.
- Month plan.
- Trimester plan.
- Year plan.

feat(roadmap_adjustment#Step refinement): Step refinement

The purpose is to make sure that the step description meets the next criteria:

- It still represents what needs to be done. Sometimes it's something that is already done, or the circumstances have changed in a way that we need to rephrase the step.
- It's clear up to the point that you don't need to think at all to start working on it.

It can be done:

- When you create a new step.
- Each time you read a step and feel that it doesn't meet the criteria.

feat(roadmap_adjustment#Task refinement): Task refinement

It fulfills these purposes:

- Define the steps required to finish a task.
- Make sure that the task still reflects a real need.
- Make sure that there is always a refined next step to finish the task.
- Clean up all the done elements that don't add value.
- Ease the overwhelming feeling when faced with a daunting task.

When done well, you'll better understand what you need to do, it will prevent you from wasting time at dead ends as you'll think before acting, and you'll develop the invaluable skill of breaking big problems into smaller ones.

It can be done differently at different moments:

- When you create a new task:
    - Decide what you want to achieve when the task is finished.
    - Create a descriptive task title.
    - Analyze the possible ways to arrive at that outcome. Try to assess different solutions before choosing one.
    - Create a list of [refined steps](#step-refinement) for each of them.
- When you finish a step, don't know how to go on, and the next step in the list is not refined enough:
    - Mark the done steps as done.
    - Do the [step refinement](#step-refinement) of the immediate next one.
- When you're working on the task and feel that it needs an update. It can be because:
    - You've been working for a while on steps of a task that are not defined in the plan, and feel that you've passed several bifurcations that you want to investigate and are afraid to forget. For example, imagine that your task plan looks like this:

      ```orgmode
      - [ ] Do A
      - [ ] Do B
      ```

      But while working on A you've actually done:

      ```orgmode
      - [ ] Do A
        - [x] Do A.1
        - [ ] Do A.2
          - [x] Do A.2.1
          - [ ] Do A.2.2
          - [ ] Investigate A.2.3
        - [ ] Investigate A.3
      - [ ] Do B
      ```

      If you find yourself doing 'Do A.2.2' but are afraid of losing 'Investigate A.2.3' and 'Investigate A.3', go back to the task plan and update it to meet the current state. There is no need to fill in the things that you've done, only the ones that you still want to do.
    - You realize that the circumstances have changed enough that you need to update the task step list or title.
- When you need to switch context to another task: this is especially necessary when you are going to stop working on the task. You never know when you're going to be able to work on it again, so it's crucial to at least refine the next step. It's also a good moment to do some [task cleaning](#task-cleaning).
- When you read the title and need to take a look at the step list to understand what it is about. Once you've grasped the idea, clarify the title.
- When the task step list has so many done items that you need to search for the next actionable step: it [requires some cleaning](#task-cleaning).
- When the task step list gets too complex: TBC.

The refinement precision needs to be incremental.
It doesn't make sense to have a perfect plan, because you often don't have all the information required to make it well and you'll surely need to adapt it. All the time spent refining steps that are going to be discarded in plan adaptations is wasted time.

feat(roadmap_adjustment#Task cleaning): Task cleaning

Marking steps as done can help you get an idea of the evolution of the task. It can also be useful if you want to do some kind of reporting. On the other hand, having a long list of done steps (especially if you have many levels of step indentation) may make finding the next actionable step difficult.

It's a good idea then to often clean up all the done items. If you don't care about the traceability of what you've done, simply delete the done lines. If you do, until there is a more automated way:

- Archive the todo element.
- Undo the archive.
- Clean up the done items.

This way you have a snapshot of the state of the task in your archive.

feat(roadmap_adjustment#Project refinement): Project refinement

The purpose is to ensure that, given the current circumstances:

- The project description represents the reality and is clear enough.
- The project roadmap defined by the task plan is the optimal path to reach the project outcome.

We can do it in two ways:

- [Rabbit hole refinement](#rabbit-hole-project-refinement)
- [Think outside the box refinement](#think-outside-the-box-project-refinement)

Rabbit hole project refinement: This kind of refinement allows you to dig deeper into whichever path you're heading down. It's mechanical and requires a limited level of creativity. It's then perfect to apply when you've just finished one of the project's tasks.

- Read the task titles to make sure that they still make sense, following the next guidelines:
    - If the task title doesn't give you enough information, read the task steps and then tweak the task title to make it clearer.
- Mark done tasks as done and archive them.
- If you need to, create new tasks with the minimal refinement required to register your idea.
- Change the order of the tasks to meet the current priorities.
- Do a [task refinement](#task-refinement) of the most imminent one.

Think outside the box project refinement: [Rabbit hole project refinement](#rabbit-hole-project-refinement) is the best way to reach the destination you're heading to. It may not be the optimal one though. As you have your head deep into the rabbit hole, it's easy to miss better alternative paths to reach the project objective. It could be interesting to use techniques that help you discover these paths, for example in a [weekly planning](#the-weekly-planning).

feat(roadmap_adjustment#Area planning): Area planning

The purpose is to ensure that the area roadmap is the optimal way to reach the area goal given the current circumstances. We do it by following the next steps:

- Check the goals of the area.
- Think or write down the best ways to reach the goals without looking at the area's projects or roadmap.
- Adjust the previous ideas after reviewing the current roadmap and the future area projects.
- Decide what the optimal way is.
- Adjust the roadmap (at project level) accordingly.

This can't be done in the frenzy of everyday life, as you're prone to fall into whatever rabbit hole you're headed to. This is the first refinement that needs its own time and reflection. As projects don't change very often, it makes sense to do it as part of the [monthly planning](#the-monthly-planning).
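The "simply delete the done lines" task cleaning can be mechanized. A minimal sketch, where the file name, its contents, and the `sed` approach are illustrative assumptions rather than part of the workflow described here:

```bash
# Create a sample task file (hypothetical content following the
# checklist conventions used in this document).
cat > todo.org <<'EOF'
* TODO Refill the fridge
- [x] Check what's left in the fridge
- [ ] Make a list of what you want to buy
  - [x] Think what you want to eat
  - [ ] Write down the list
- [ ] Go to the green grocery store
EOF

# Delete every step already marked as done, at any indentation level,
# keeping the task heading and the open steps intact.
sed -i '/^[[:space:]]*- \[[xX]\] /d' todo.org
cat todo.org
```

If you care about traceability, prefer the archive/unarchive trick described above instead of deleting, since this removes the done lines for good.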
feat(roadmap_adjustment#Roadmap adjustments by purpose): Roadmap adjustments by purpose

Given the level of control of your life, you can do the next adjustments:

- [Survive the day](roadmap_adjustment.md#survive-the-day)
- [Survive the week](roadmap_adjustment.md#survive-the-week)
- [Ride the month](roadmap_adjustment.md#ride-the-month)

As you master a purpose level, you will have more experience and tools to manage your life more efficiently, at the same time that you have less stress and mental load thanks to the reduction of uncertainty. This new state in theory (if life lets you) will eventually give you the energy to jump to the next purpose levels.

feat(roadmap_adjustment#Survive the day): Survive the day

At this level you're going around with your eyes closed and only react when life throws stuff at you. You'll surely be surprised by what hits you and how hard, so you probably won't be able to address it in the best way. You just want it to stop. This adjustment level aims to let you handle those hits without missing the stuff you need to do.

This adjustment is split into the next parts:

- [Get used to working with simple tasks](roadmap_adjustment.md#get-used-to-work-with-simple-tasks)
- [Make a day plan](roadmap_adjustment.md#make-a-day-plan)
- [Follow the day plan](roadmap_adjustment.md#follow-the-day-plan)
- [Control your inbox](roadmap_adjustment.md#control-your-inbox)

Get used to working with simple tasks: We'll start building a system that helps us not die in agony at life's aggressions with the spare energy we have left. One way to do it is to choose the tools to manage your life. Start small, only trying to manage the [step](#step) and [task](#task) roadmap adjustments. [The simplest task manager](task_tools.md#the-simplest-task-manager) is a good start.

Make a day plan: This plan defines, at the day level, which tasks you are going to work on and schedules when you are going to address them. The goal is to survive the day.
It's a good starting point if you forget to do tasks that need to be done in the day, or if you miss appointments. It's best to make your plan at the start of the day. I follow the next steps:

- Clarify the state of the world:
    - Get an idea of what you need to do by checking and cleaning:
        - Calendar events.
        - Your org agenda of the day: for each element, decide if it needs to be in the agenda and refile it to the chosen destination.
        - The last day's plan.
        - The month objectives, if you have them.
        - How much uninterrupted time you have between calendar events.
        - Your mental and physical state.
- Check if you can transition the `WAITING` tasks to `DOING` or `TODO`.
- Write the objectives of the day.

To make it easy to follow, I use a bash script that asks me to follow these steps.

Follow the day plan: There are two tools that will help you follow the day plan:

- [The calendar event notification system](calendar_management.md#calendar-event-notification-system), to avoid spending mental load tracking when the next appointment starts and to reduce the chances of missing it.
- Periodic checks of the day plan: If you use the [pomodoro technique](task_tools.md#pomodoro), after each iteration check your day objectives and assess whether you're going to finish what you proposed, or whether you need to tweak the task steps to do so.

Control your inbox: The [Inbox](task_tools.md#inbox) is a nasty daemon that loves to get out of control. You need to develop your inbox cleaning skills and processes up to the point where you're sure that the important stuff is tracked where it should be. So far, aiming to have a zero-element inbox is unrealistic though, at least for me.

feat(roadmap_adjustment#Survive the week): Survive the week

At this level you're able to open your myopic eyes, so you start to guess what life throws at you. This may be enough to gracefully handle some of the small stuff.
The fast ones will still hit you though, as you still don't have much time or definition to react.

This adjustment is whatever you need to do to get your head empty again and get oriented for the next 9 days. It's split into the next phases:

- [Week plan](#week-plan)

feat(roadmap_adjustment#Week plan): Week plan

No matter how good our intentions or system may be, you're going to take on more opportunities than you can handle. The more efficient you become, the more ground you'll try to cover. You're going to have to learn to say no faster, and to more things, in order to stay afloat and comfortable. Having some dedicated time in the week to at least get up to the project level of thinking goes a long way towards making that easier.

The plan defines, at a 9-day time scale, which projects you are going to work on. It's the next roadmap level, addressing a group of tasks. The goal changes from surviving the day to starting to plan your life.

It's a good starting point if you are comfortable working with the pomodoro, task and day plans, and want to start deciding where you're heading to. Make your plan on meaningful days, both to make it more effective and to make it more difficult to skip. Maybe you can do it at the start of the week. I personally do it on Thursdays, because that's when I have the most information about the weekend events and I have some free time.

I follow the next steps:

- Clean your agenda for the next 9 days, refiling or rescheduling items as you need. If you are using your calendar well, you shouldn't need to make any change, just load into your mind the things you are meant to do.
- If you're already at the ride-the-month adjustment:
    - Refine your month objective plans. For each objective, decide the tasks/projects to be worked on and refactor them in the roadmap section of the `todo.org`.

When doing the plan, try to minimize the number of tasks and calendar appointments so as not to get overwhelmed.
It's better to eventually fall short on tasks than to never reach your goal.

To make it easy to follow, I use a bash script that asks me to follow these steps.

feat(roadmap_adjustment#Ride the month): Ride the month

At this level you have not only had time to polish your roadmap adjustment skills, but you've also had the chance to buy some glasses for your myopic eyes! The increase in definition and in time to react to what life throws at you lets you now take almost no hits `\\ ٩( ᐛ )و //`.

Now that you have stopped worrying about your integrity, you start to hear a little voice from within yourself that gives you reports from your body and brain about what worries you, what makes you happy, what makes you mad, ... Has it been yelling all this time? `(¬º-°)¬`

At this adjustment level we'll start using the next abstraction level, the [objectives]. It's split into the next phases:

Personal integrity review: The objectives of the personal integrity review are:

- Identify how you feel and what worries you.
- Identify strong and weak points in your systems.
- Identify deadlines.

The objectives aren't to:

- Assess the progress on your objectives and decisions.

Doing this adjustment once per month is a good frequency given the speed of life change and the effort required to do it. It's interesting to do these reviews on meaningful days, such as the last day of the month. Usually we don't have enough flexibility in our lives to do it exactly that day, so schedule it as close as you can to that date. It's a good idea to do both the review and the planning on the same day.

As it's a process we're going to do very often, we need it to be relatively quick and easy, so as not to invest too much time or energy in it. Keep in mind that this should be an analysis at the month level in terms of abstraction; here is not the place to ask yourself whether you're fulfilling your life goals.
As such, you don't need that much time either; just identifying the top things that pop out of your mind is more than enough.

Personal integrity review tools: With a new level of abstraction we need new tools:

- The *Review box*: It's the place where you leave notes for yourself to process when you do the review; it can be, for example, a physical folder or a computer text file. I use a file called `review_box.org`. It's filled after the refile of review elements captured in the rest of my inboxes.
- The *Month checks*: It's a list of elements whose evolution you want to periodically check throughout time. It's useful to analyze the validity of theories or new processes. I use the heading `Month checks` in a file called `life_checks.org`.
- The *Objective list*: It's a list of elements you want to focus your energies on. It should be easy to consult. I started with a list per month in a file called `objectives.org` and then migrated to the [life path document](#the-life-path-document).

Personal integrity review phases: We'll divide the review process into these phases:

- [Prepare](#survive-review-prepare)
- [Discover](#survive-review-discover)
- [Analyze](#survive-review-analyze)
- [Decide](#survive-review-decide)

To record the results of the review, create the file `references/reviews/YYYY_MM.org`, where the month is the one that is ending, with the following template:

```org
* Discover
* Analyze
* Decide
```

Personal integrity review prepare: It's important that you prepare your environment for the review. You need to be present and fully focused on the process itself. To do so you can:

- Make sure you don't get interrupted:
    - Check your task manager tools to make sure that you don't have anything urgent to address in the next hour.
    - Disable all notifications.
- Set your analysis environment:
    - Put on the music that helps you get *in the zone*. I find it meaningful to select the best new music I've discovered this month.
    - Get all the things you may need for the review:
        - The checklist that defines the process of your review (this document in my case).
        - Somewhere to write down the insights.
        - Your *Review box*.
        - Your *life path document*.
    - Remove from your environment everything else that may distract you:
        - Close all the windows on your laptop that you're not going to use.

Personal integrity review discover: Try not to, but if you think of decisions you want to make that address the elements you're discovering, write them down in the `Decide` section of your review document.

There are different paths to discover actionable items:

- Analyze what is in your mind: Take 10 minutes to answer the next questions (you don't need to answer them all):
    - Where is your mind these days?
    - What drained your energy or brought you down emotionally this last month?
    - What worries you right now?
    - What helped you most this last month?
    - What did you enjoy most this last month?

  Notice that we do not need to review our life logging tools (diary, task manager, ...) to answer these questions. This means that we're doing an analysis of what is in our minds right now, not throughout the month. It's flawed, but as we do this analysis often it's probably fine. We give more importance to the latest events in our lives anyway.
- Empty the elements you added to the *Review box*.
- Process your *Month checks*. For each of them:
    - Think about whether you've met the check.
    - If you need to, add action elements to the `Discover` section of the review.
- Process your *Month objectives*. For each of them:
    - Think about whether you've met the objective.
    - If you need to, add action elements to the `Discover` section of the review.
    - If you won't need the objective in the next month, archive it.

Personal integrity review analyze: We need to understand the identified elements better to be able to choose the right path to address them.
These elements are usually representations of a state of our lives that we want to change.

- For each of them, if you can think of an immediate solution to address the element, add it to the `Decide` section; otherwise add it to the `Analyze` section.
- Order the elements in `Analyze` by priority.

Then allocate 20 minutes to think about them. Go from top to bottom, moving on once you feel an element has been analyzed enough. You may not have time to analyze all of them. That's fine. Answering the next questions may help you:

- What defines the state we want to change?
- What are the underlying forces in your life that made you reach that state?
- What state do you want to transition to?
- What is the easiest way to reach that destination?

For the last question you can resort to:

- Habit changes.
- Project changes: start or stop doing a series of tasks.
- Roadmap changes.

Once you have analyzed an element, copy all the decisions you've made into the `Decide` section of your review document.

Personal integrity review decide: Once you have a clear definition of the current state, the new one, and how to reach it, you need to process each of the decisions you've identified through the review process so that they are represented in your life management system, otherwise you won't arrive at the desired state.

To do so, analyze the best way to process each of the elements you have written in the `Decide` section. It can be one or many of the following:

- Identify hard deadlines: Add a warning some days before the deadline to make sure you're reminded until it's done.
- Create or tweak a habit.
- Tweak your project and task definitions.
- Create *checks* to make sure that they are not overlooked.
- Create objectives that will be checked in the next reviews (weekly and monthly).
- Create [Anki](anki.md) cards to keep the idea in your mind.

Finally:

- Check the last month's checks and complete this month's ones.
- Pat yourself on the shoulder, as you've finished the review ^^.
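Creating the monthly review file can be scripted. A minimal sketch: only the `references/reviews/YYYY_MM.org` path and the three-heading template come from the text above; the rest (and using the current month rather than the one that is ending) are simplifying assumptions:

```bash
#!/bin/bash
# Sketch: bootstrap this month's review file with the
# Discover/Analyze/Decide template. Using the current month is a
# simplification; the review text says the month that is ending.
review_file="references/reviews/$(date +%Y_%m).org"
mkdir -p "$(dirname "$review_file")"
# Don't clobber an existing review.
if [ ! -e "$review_file" ]; then
  printf '* Discover\n* Analyze\n* Decide\n' > "$review_file"
fi
echo "Review file ready at $review_file"
```

This kind of snippet fits naturally into the bash checklist scripts mentioned in the day and week plans.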
feat(time_management_abstraction_levels): Introduce the time management abstraction levels

To be able to manage the complexity of the life roadmap, we can use models for different levels of abstraction with different purposes. In increasing level of abstraction:

- [Step](time_management_abstraction_levels.md#step)
- [Task](time_management_abstraction_levels.md#task)
- [Project](time_management_abstraction_levels.md#project)
- [Area](time_management_abstraction_levels.md#area)
- [Goal](time_management_abstraction_levels.md#goal)
- [Vision](time_management_abstraction_levels.md#vision)
- [Purpose and principles](time_management_abstraction_levels.md#purpose-and-principles)

**Step**

It's the smallest unit in our model: a clear representation of an action you need to do. It needs to fit in a phrase and usually starts with a verb. The scope of the action has to be narrow enough that you can follow it without investing thinking energy. In orgmode, steps are represented as checklists:

```orgmode
- [ ] Go to the green grocery store
```

Sometimes it's useful to add more context to a step; you can use an indented list. For example:

```orgmode
- [ ] Call dad
  - [2023-12-11] Tried but he doesn't pick up
  - [2023-12-12] He told me to call him tomorrow
```

This is useful when you update waiting tasks.

There are cases where it's also interesting to record when you've completed a step; you can append the date at the end:

```orgmode
- [x] Completed step [2023-12-12]
```

**Task**

Models an action that is defined by a list of steps that need to be completed. It has two possible representations in orgmode:

- TODO items with checklists:

```orgmode
* TODO Refill the fridge
- [ ] Check what's left in the fridge
- [ ] Make a list of what you want to buy
- [ ] Go to the green grocery store
```

- Nested step checklists. You may realize that to make the list of what you want to buy, you first want to think of what you want to eat.
You could then:

```orgmode
- [ ] Make a list of what you want to buy
  - [ ] Think what you want to eat
  - [ ] Write down the list
```

Nested lists can also be found inside todo items:

```orgmode
* TODO Refill the fridge
- [ ] Check what's left in the fridge
- [ ] Make a list of what you want to buy
  - [ ] Think what you want to eat
  - [ ] Write down the list
- [ ] Go to the green grocery store
```

This is fine as long as it's manageable; once you start seeing many levels of indentation, it's a great sign that you need to divide your task into different tasks.

*Adding more context to the task*

Sometimes a task title is not enough. You need to register more context to be able to deal with the task. In those cases we need the task to be represented as a todo element. Between the title and the step list we can add the description:

```orgmode
* TODO Task title

This is the description of the task to add more context

- [ ] Step 1
- [ ] Step 2
```

If you need to use a list in the context, add a Steps section below to avoid errors in the editor:

```orgmode
* TODO Task title

This is the description of the task to add more context:

- Context 1
- Context 2

Steps:
- [ ] Step 1
- [ ] Step 2
```

*Preventing the closing of a task without reading the step list*

If you manage your tasks from an agenda or only by reading the task title, there may be cases where you feel that the task is done, but if you look at the step list you may realize that there is still stuff to do.

A measure that can prevent this case is to add a mark in the task title that suggests you check the steps. For example:

```orgmode
* TODO Task title (CHECK)
- [ ] ...
```

This is especially useful on recurring tasks that have a defined workflow that needs to be followed, or on tasks that have defined validation criteria.

**Project**

Models an action that gathers a list of tasks towards a common greater outcome.

```orgmode
* TODO Guarantee you eat well this week
** TODO Plan what you want to eat
- [ ] ...
** TODO Refill the fridge
- [ ] ...
** TODO Batch cook for the week
- [ ] ...
```

**Area**

Models a group of projects and tasks that follow the same interest, roles or accountabilities. These are not things to finish, but rather criteria for analyzing and defining a specific aspect of your life, and for prioritizing the projects to reach a higher outcome. We'll use areas to maintain balance and sustainability in our responsibilities as we operate in the world.

I use specific orgmode files with the next structure:

```orgmode
Objectives:
- [ ] ...

* Area roadmap
...

* Area backlog
...
```

To find them easily, I add a section in the `index.org` of the documentation repository. For example:

```orgmode
* Areas
** [[file:./happiness.org][Happiness]]
*** Project 1 of happiness
** [[file:./activism.org][Activism]]
** [[file:./efficiency.org][Efficiency]]
** [[file:./work.org][Work]]
```

**Objective**

An [objective] is an idea of the future or desired result that a person or a group of people envision, plan, and commit to achieve.

**Strategy**

[Strategy](strategy.md) is a general plan to achieve one or more long-term or overall objectives under conditions of uncertainty. Strategies can be used to define the direction of the [areas](#area).

**Tactic**

A [tactic](https://en.wikipedia.org/wiki/Tactic_(method)) is a conceptual action or short series of actions with the aim of achieving a short-term goal. This action can be implemented as one or more specific tasks.

**Life path**

Models the evolution of the principles and objectives throughout time. It's the highest level of abstraction of my life management system so far, and it will probably be refactored soon into other documents. The structure of the [orgmode](orgmode.md) document is as follows:

```orgmode
* Life path
** {year}
*** Principles of {season} {year}
    {Notes on the season}
    - Principle 1
    - Principle 2
    ...
**** Objectives of {month} {year}
     - [-] Objective 1
       - [X] SubObjective 1
       - [ ] SubObjective 2
     - [ ] Objective 2
     - [ ] ...
```

Where the principles are usually links to principle documents and the objectives links to tasks.

**Goal**

Models what you want to be experiencing in various areas of your life one or two years from now. A `goals.org` file with a list of headings may work.

**Vision**

Aggregates groups of goals under a three-to-five-year common outcome. Visions help you think about bigger categories: life strategies, environmental trends, political context, career and lifestyle transition circumstances. I haven't reached this level of abstraction yet, so I'm not sure how to implement it.

**Purpose and principles**

The purpose defines the reason and meaning of your existence; principles define your morals, the parameters of action and the criteria for excellence of conduct. These are the core definition of what you really are. Visions, goals, objectives, projects and tasks derive from and lead towards them.

As we increase the level of abstraction, we need more time and energy (both mental and willpower) to adjust the path. It may also mean that the efforts invested so far are not aligned with the new direction, so we may need to throw away some of the advances made. That's why we need to support those changes with a higher level of analysis and thought.

feat(vim_autosave): Autosave in vim

To automatically save your changes in NeoVim you can use the [auto-save](https://github.com/okuuva/auto-save.nvim?tab=readme-ov-file#%EF%B8%8F-configuration) plugin.
It has some nice features:

- Automatically save your changes so the world doesn't collapse.
- Highly customizable:
    - Conditionals to assert whether to save or not.
    - Execution message (it can be dimmed and personalized).
    - Events that trigger auto-save.
    - Debounce the save with a delay.
    - Hook into the lifecycle with autocommands.
    - Automatically clean the message area.

**[Installation](https://github.com/okuuva/auto-save.nvim?tab=readme-ov-file#-installation)**

```lua
{
  "okuuva/auto-save.nvim",
  cmd = "ASToggle", -- optional for lazy loading on command
  event = { "InsertLeave", "TextChanged" }, -- optional for lazy loading on trigger events
  opts = {
    -- your config goes here
    -- https://github.com/okuuva/auto-save.nvim?tab=readme-ov-file#%EF%B8%8F-configuration
    execution_message = {
      enabled = false,
    },
  },
},
```

feat(vim_plugin_development): Record a good example of a simple plugin

Check [org-checkbox](https://github.com/massix/org-checkbox.nvim/blob/trunk/lua/orgcheckbox/init.lua) to see a simple one.

feat(wordpress#Interact with Wordpress with Python): Interact with Wordpress with Python

Read [this article](https://robingeuens.com/blog/python-wordpress-api/)

feat(yt-dlp): Install yt-dlp

```bash
pipx install --pip-args='--pre' yt-dlp
```

feat(zalando_postgres_operator): …
**System information**

**Describe the problem you're observing**

The bug happened after a nightly syncoid run (which uses `zfs send` and `zfs receive`) and froze the filesystem until the system was rebooted.

**Describe how to reproduce the problem**

I could not reproduce the bug on a newer kernel/zfs version (kernel 5.4.106-1-pve, zfs 2.0.4-pve1), but the bug only happens sometimes, so I'll see over time whether it still occurs.

**Include any warning/errors/backtraces from the system logs**