
BUG: kernel NULL pointer dereference in zap_lockdir #11804

Open
jjakob opened this issue Mar 26, 2021 · 5 comments
Labels
Status: Triage Needed (new issue which needs to be triaged) · Type: Defect (incorrect behavior, e.g. crash, hang)

Comments


jjakob commented Mar 26, 2021

System information

Type Version/Name
Distribution Name Proxmox VE
Distribution Version 6
Linux Kernel 5.4.103-1-pve
Architecture x86_64
ZFS Version 2.0.3-pve
SPL Version 2.0.3-pve2

Describe the problem you're observing

The bug occurred after a nightly syncoid run (which uses zfs send and zfs receive); the filesystem remained frozen until the system was rebooted.

Describe how to reproduce the problem

I could not reproduce the bug on a newer kernel/ZFS version (kernel 5.4.106-1-pve, zfs 2.0.4-pve1), but since the bug only happens occasionally, I'll see over time whether it still occurs.

Include any warning/errors/backtraces from the system logs

BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] SMP NOPTI
CPU: 0 PID: 21353 Comm: zfs Tainted: P           O      5.4.103-1-pve #1
Hardware name: HPE ProLiant MicroServer Gen10/ProLiant MicroServer Gen10, BIOS 5.12 02/19/2020
RIP: 0010:zap_lockdir_impl+0x284/0x750 [zfs]
Code: d2 75 92 4d 89 b7 98 00 00 00 48 8b 75 80 48 89 df e8 50 2e f4 ff eb 97 48 8b 43 18 48 c7 c2 18 08 7a c0 31 f6 bf 28 01 00 00 <48> 8
48 89 8d 58 ff ff ff b9 a3 01 00 00 48 89 85
RSP: 0018:ffffb298d0f3b748 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff9d84afbc7980 RCX: 0000000000000000
RDX: ffffffffc07a0818 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffb298d0f3b800 R08: 0000000000000001 R09: 0000000000000000
R10: 2000000000000000 R11: 0000008000000000 R12: ffffb298d0f3b888
R13: 0000000000000002 R14: ffff9d84d2e4e800 R15: 0000000000000000
FS:  00007f1e2a3da7c0(0000) GS:ffff9d863aa00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 00000002580f8000 CR4: 00000000001406f0
Call Trace:
 zap_lockdir+0x8b/0xb0 [zfs]
 zap_lookup_norm+0x5d/0xc0 [zfs]
 zap_lookup+0x16/0x20 [zfs]
 zfs_get_zplprop+0x9d/0x190 [zfs]
 dmu_send_impl+0xf4e/0x1500 [zfs]
 ? dsl_dataset_feature_is_active+0x50/0x50 [zfs]
 ? dnode_rele_and_unlock+0x68/0xe0 [zfs]
 ? dnode_rele+0x3b/0x40 [zfs]
 ? dbuf_rele_and_unlock+0x306/0x6a0 [zfs]
 ? dbuf_rele+0x3b/0x40 [zfs]
 ? dmu_buf_rele+0xe/0x10 [zfs]
 dmu_send_obj+0x245/0x350 [zfs]
 zfs_ioc_send+0x11d/0x2c0 [zfs]
 ? zfs_ioc_send+0x2c0/0x2c0 [zfs]
 zfsdev_ioctl_common+0x5b2/0x820 [zfs]
 ? __kmalloc_node+0x267/0x330
 ? spl_kmem_free+0x33/0x40 [spl]
 zfsdev_ioctl+0x54/0xe0 [zfs]
 do_vfs_ioctl+0xa9/0x640
 ? handle_mm_fault+0xc9/0x1f0
 ksys_ioctl+0x67/0x90
 __x64_sys_ioctl+0x1a/0x20
 do_syscall_64+0x57/0x190
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f1e2a2c5427
Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
RSP: 002b:00007ffd9295c0d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 000055fffc274350 RCX: 00007f1e2a2c5427
RDX: 00007ffd9295c110 RSI: 0000000000005a1c RDI: 0000000000000003
RBP: 00007ffd9295fb00 R08: 000055fffc2e1a50 R09: 000000000000000f
R10: 000055fffc274300 R11: 0000000000000246 R12: 00007ffd9295c110
R13: 000055fffc274360 R14: 0000000000000000 R15: 000000000001790f
Modules linked in: binfmt_misc dm_crypt twofish_generic twofish_avx_x86_64 twofish_x86_64_3way twofish_x86_64 twofish_common serpent_avx2
4 serpent_sse2_x86_64 serpent_generic algif_skcipher af_alg tcp_diag udp_diag inet_diag veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter bonding softdog nfnetlink_log nfnetlink amd64_edac_mod edac_mce_amd kvm_amd ccp kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd amdgpu cryptd glue_helper amd_iommu_v2 gpu_sched ttm pcspkr fam15h_power k10temp drm_kms_helper drm i2c_algo_bit fb_sys_fops syscopyarea sysfillrect sysimgblt 8250_dw mac_hid vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi jc42 sunrpc ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO)
) btrfs xor zstd_compress hid_generic raid6_pq libcrc32c
 usbhid hid uas usb_storage i2c_piix4 xhci_pci ehci_pci ehci_hcd xhci_hcd ahci tg3 libahci video
CR2: 0000000000000000
---[ end trace a5fc98b08b735453 ]---
RIP: 0010:zap_lockdir_impl+0x284/0x750 [zfs]
Code: d2 75 92 4d 89 b7 98 00 00 00 48 8b 75 80 48 89 df e8 50 2e f4 ff eb 97 48 8b 43 18 48 c7 c2 18 08 7a c0 31 f6 bf 28 01 00 00 <48> 8b 08 48 8b 40 08 48 89 8d 58 ff ff ff b9 a3 01 00 00 48 89 85
RSP: 0018:ffffb298d0f3b748 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff9d84afbc7980 RCX: 0000000000000000
RDX: ffffffffc07a0818 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffb298d0f3b800 R08: 0000000000000001 R09: 0000000000000000
R10: 2000000000000000 R11: 0000008000000000 R12: ffffb298d0f3b888
R13: 0000000000000002 R14: ffff9d84d2e4e800 R15: 0000000000000000
FS:  00007f1e2a3da7c0(0000) GS:ffff9d863aa00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 00000002580f8000 CR4: 00000000001406f0
@jjakob jjakob added Status: Triage Needed New issue which needs to be triaged Type: Defect Incorrect behavior (e.g. crash, hang) labels Mar 26, 2021

brenc commented Sep 24, 2021

This happened to me this morning on a recently built Proxmox VE 7 server running sanoid/syncoid every minute. It's a FreeNAS Mini server, which uses an ASRock Rack C2750D4I board. I've had two of these servers running FreeBSD 12.2 for years without issue and only recently converted them to Proxmox/ZFS on Linux.

Almost all zfs/zpool commands froze completely (Ctrl-C did not work), though I was able to run zpool status. I have three pools, and for some reason my main storage pool reported this:

  pool: storage
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 1.45T in 05:41:01 with 0 errors on Mon Sep 13 21:55:24 2021
config:

        NAME                            STATE     READ WRITE CKSUM
        storage                         ONLINE       0     0     0
          raidz2-0                      ONLINE       0     0     0
            ata-WDC_WD60EFZX-68B3FN0_1  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68MYMN1_2  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68L0BN1_3  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68MYMN1_4  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68L0BN1_5  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68L0BN1_6  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68L0BN1_7  ONLINE       0     0     0
            ata-WDC_WD60EFRX-68MYMN1_8  ONLINE       0     0     0

errors: 1 data errors, use '-v' for a list

This was a freshly built pool as of a few weeks ago. The only change I've made since building it was to swap out what turned out to be an SMR drive (all drives are now CMR). As you can see, the resilver completed successfully with no errors.

The filesystems could still be accessed, though accessing a snapshot directory hung the ls command.

When I ran zpool status -v it hung indefinitely and I had to reboot the box. After the reboot, zpool status reported no errors whatsoever. I'm running a scrub now.

I could find no other errors in any of the logs.

# uname -a
Linux vmh01 5.11.22-3-pve #1 SMP PVE 5.11.22-7 (Wed, 18 Aug 2021 15:06:12 +0200) x86_64 GNU/Linux

# zfs version
zfs-2.0.5-pve1
zfs-kmod-2.0.5-pve1
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0 
SMP PTI
CPU: 2 PID: 2767864 Comm: zfs Tainted: P           O      5.11.22-3-pve #1
Hardware name: iXsystems FREENAS-MINI-2.0/C2750D4I, BIOS P3.20 03/26/2018
RIP: 0010:zap_lockdir_impl+0x2a6/0x7b0 [zfs]
Code: 86 d8 00 00 00 00 00 00 00 e8 f6 7e f3 e2 4d 89 ae 98 00 00 00 e9 47 fe ff ff 48 8b 43 18 b9 a3 01 00 00 31 f6 bf 28 01 00 00 <48> 8b 10 48 8b 40 08 48 89 95 58 ff ff ff 48 c7 c2 18 65 8a c0 48
RSP: 0018:ffffa5b24dcfb830 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8db0fe204d80 RCX: 00000000000001a3
RDX: ffff8db056839700 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffa5b24dcfb8f0 R08: 0000000000000001 R09: 0000000000000000
R10: 2000000000000000 R11: 0000008000000000 R12: 0000000000000002
R13: ffff8db1e3d64800 R14: 0000000000000000 R15: ffffa5b24dcfb968
FS:  00007fa1d9c747c0(0000) GS:ffff8db69fc80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000058b98e000 CR4: 00000000001026e0
Call Trace:
 ? dbuf_read+0x128/0x550 [zfs]
 zap_lockdir+0x8c/0xb0 [zfs]
 zap_lookup+0x50/0x100 [zfs]
 zfs_get_zplprop+0xb7/0x1a0 [zfs]
 dmu_send_impl+0xf01/0x1550 [zfs]
 ? dnode_rele+0x3d/0x50 [zfs]
 ? dbuf_rele_and_unlock+0x1fd/0x610 [zfs]
 dmu_send_obj+0x24c/0x360 [zfs]
 zfs_ioc_send+0x11d/0x2c0 [zfs]
 ? zfs_ioc_send+0x2c0/0x2c0 [zfs]
 zfsdev_ioctl_common+0x72c/0x940 [zfs]
 ? __check_object_size+0x5d/0x150
 zfsdev_ioctl+0x57/0xe0 [zfs]
 __x64_sys_ioctl+0x91/0xc0
 do_syscall_64+0x38/0x90
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7fa1da24ecc7
Code: 00 00 00 48 8b 05 c9 91 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 99 91 0c 00 f7 d8 64 89 01 48
RSP: 002b:00007ffdc3efca18 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 000056233d62caa0 RCX: 00007fa1da24ecc7
RDX: 00007ffdc3efca50 RSI: 0000000000005a1c RDI: 0000000000000003
RBP: 00007ffdc3f00440 R08: 000000000000000f R09: 0000000000000028
R10: 0000000000000080 R11: 0000000000000246 R12: 00007ffdc3efca50
R13: 00007ffdc3f05270 R14: 000056233d62cab0 R15: 0000000000000000
Modules linked in: tcp_diag inet_diag veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables sctp ip6_udp_tunnel udp_tunnel iptable_filter bpfilter softdog nfnetlink_log nfnetlink ipmi_ssif intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper intel_cstate pcspkr at24 joydev input_leds ast drm_vram_helper drm_ttm_helper ttm drm_kms_helper cec rc_core fb_sys_fops syscopyarea sysfillrect sysimgblt acpi_ipmi ipmi_si ipmi_devintf ipmi_msghandler mac_hid vhost_net vhost vhost_iotlb tap ib_iser rdma_cm iw_cm ib_cm ib_core nfsd iscsi_tcp libiscsi_tcp libiscsi auth_rpcgss scsi_transport_iscsi nfs_acl lockd grace drm sunrpc nfs_ssc ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) btrfs blake2b_generic xor raid6_pq libcrc32c hid_generic usbkbd usbmouse usbhid hid gpio_ich mpt3sas i2c_i801 ehci_pci igb raid_class
 ahci i2c_algo_bit crc32_pclmul i2c_smbus lpc_ich ehci_hcd i2c_ismt dca libahci scsi_transport_sas
CR2: 0000000000000000
---[ end trace bfdfd0e448c4e64f ]---
RIP: 0010:zap_lockdir_impl+0x2a6/0x7b0 [zfs]
Code: 86 d8 00 00 00 00 00 00 00 e8 f6 7e f3 e2 4d 89 ae 98 00 00 00 e9 47 fe ff ff 48 8b 43 18 b9 a3 01 00 00 31 f6 bf 28 01 00 00 <48> 8b 10 48 8b 40 08 48 89 95 58 ff ff ff 48 c7 c2 18 65 8a c0 48
RSP: 0018:ffffa5b24dcfb830 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8db0fe204d80 RCX: 00000000000001a3
RDX: ffff8db056839700 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffa5b24dcfb8f0 R08: 0000000000000001 R09: 0000000000000000
R10: 2000000000000000 R11: 0000008000000000 R12: 0000000000000002
R13: ffff8db1e3d64800 R14: 0000000000000000 R15: ffffa5b24dcfb968
FS:  00007fa1d9c747c0(0000) GS:ffff8db69fc80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000058b98e000 CR4: 00000000001026e0


stale bot commented Sep 24, 2022

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the Status: Stale No recent activity for issue label Sep 24, 2022

zig commented Oct 13, 2022

Hi, I seem to have hit a similar issue while doing a send. The system is Ubuntu 22.04 with stock ZFS (2.1.4-0ubuntu0.1); encryption is enabled.

BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0 
Oops: 0000 [#1] SMP NOPTI
CPU: 9 PID: 2985410 Comm: zfs Tainted: P           O      5.15.0-48-generic #54-Ubuntu
Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U2-2T, BIOS L2.02P 10/19/2020
RIP: 0010:mzap_open+0x37/0x330 [zfs]
Code: e5 41 57 49 89 f7 31 f6 41 56 41 55 41 54 53 48 89 d3 48 83 ec 10 48 8b 42 18 48 89 7d d0 48 c7 c2 a8 d3 e6 c0 bf 28 01 00 00 <4c> 8b 30 48 8b 40 08 48 89 45 c8 e8 89 1b 6a ff 48 c7 c2 0c 8d f2
RSP: 0018:ffffa6dd8ed937c8 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff8dc0d6897000 RCX: 00000000000001a1
RDX: ffffffffc0e6d3a8 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffa6dd8ed93800 R08: 0000000000000000 R09: 0020000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff8dbfc4cef800
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000001
FS:  00007f60a5e8f7c0(0000) GS:ffff8dc67f640000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000b129fa003 CR4: 00000000003726e0
Call Trace:
 <TASK>
 zap_lockdir_impl+0x2cd/0x3a0 [zfs]
 zap_lockdir+0x92/0xb0 [zfs]
 zap_lookup_norm+0x5c/0xd0 [zfs]
 ? dnode_special_open+0x4d/0x90 [zfs]
 zap_lookup+0x16/0x20 [zfs]
 zfs_get_zplprop+0xb7/0x1b0 [zfs]
 setup_featureflags+0x21b/0x260 [zfs]
 dmu_send_impl+0xdd/0xbf0 [zfs]
 ? dnode_rele+0x39/0x50 [zfs]
 ? dmu_buf_rele+0xe/0x20 [zfs]
 ? zap_unlockdir+0x46/0x60 [zfs]
 dmu_send_obj+0x265/0x340 [zfs]
 zfs_ioc_send+0xe8/0x2c0 [zfs]
 ? dump_bytes_cb+0x30/0x30 [zfs]
 zfsdev_ioctl_common+0x683/0x740 [zfs]
 ? __check_object_size.part.0+0x4a/0x150
 ? _copy_from_user+0x2e/0x70
 zfsdev_ioctl+0x57/0xf0 [zfs]
 __x64_sys_ioctl+0x92/0xd0
 do_syscall_64+0x59/0xc0
 entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f60a670eaff
Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <41> 89 c0 3d 00 f0 ff ff 77 1f 48 8b 44 24 18 64 48 2b 04 25 28 00
RSP: 002b:00007ffe33c53e70 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 000055d027f23bc0 RCX: 00007f60a670eaff
RDX: 00007ffe33c54300 RSI: 0000000000005a1c RDI: 0000000000000003
RBP: 00007ffe33c578f0 R08: 000055d027f27ba0 R09: 0000000000000000
R10: 0000000000000009 R11: 0000000000000246 R12: 000055d027f22660
R13: 00007ffe33c54300 R14: 000055d027f23bd0 R15: 0000000000000000
 </TASK>
Modules linked in: tls nvme_fabrics xt_nat vhost_net vhost vhost_iotlb tap ufs qnx4 hfsplus hfs minix ntfs msdos jfs xfs xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp nft_compat nft_chain_nat nf_nat nf_conntr>
 i2c_algo_bit drm_ttm_helper ttm drm_kms_helper crct10dif_pclmul crc32_pclmul ghash_clmulni_intel syscopyarea sysfillrect sysimgblt fb_sys_fops aesni_intel cec rc_core crypto_simd cryptd ixgbe nvme intel_lpss_pci drm xfrm_algo i2c_i801 a>
CR2: 0000000000000000
---[ end trace 63e074585d96fd8a ]---
RIP: 0010:mzap_open+0x37/0x330 [zfs]
Code: e5 41 57 49 89 f7 31 f6 41 56 41 55 41 54 53 48 89 d3 48 83 ec 10 48 8b 42 18 48 89 7d d0 48 c7 c2 a8 d3 e6 c0 bf 28 01 00 00 <4c> 8b 30 48 8b 40 08 48 89 45 c8 e8 89 1b 6a ff 48 c7 c2 0c 8d f2
RSP: 0018:ffffa6dd8ed937c8 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff8dc0d6897000 RCX: 00000000000001a1
RDX: ffffffffc0e6d3a8 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffa6dd8ed93800 R08: 0000000000000000 R09: 0020000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff8dbfc4cef800
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000001
FS:  00007f60a5e8f7c0(0000) GS:ffff8dc67f640000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000b129fa003 CR4: 00000000003726e0

Four days ago I upgraded from Ubuntu 20.04 with stock ZFS 0.8, which never had any issue in more than two years of heavy usage (snapshots of VM zvol devices sent to a backup server every 15 minutes).

After the upgrade, I ran a "zpool upgrade -a". Everything seemed OK until this morning, when I found the machine stuck during a regular "send".

@stale stale bot removed the Status: Stale No recent activity for issue label Oct 13, 2022

zig commented Oct 13, 2022

Additional information: for two days prior to the crash, syncoid had been failing to send snapshots of a particular path; more precisely, the receiver was closing because of an "invalid backup stream":

sanoid[1559281]: Sending incremental rpool/ROOT/ubuntu_wrbx68/var/log@autosnap_2022-10-11_01:00:06_hourly ... syncoid_marseille1_XXXovh1_2022-10-11:01:02
:04-GMT00:00 (~ 27.7 MB):
sanoid[1562717]: warning: cannot send 'rpool/ROOT/ubuntu_wrbx68/var/log@syncoid_marseille1_XXXovh1_2022-10-11:01:00:08-GMT00:00': Input/output error
zed[1562857]: eid=16005 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=55451:0:0:869
zed[1562860]: eid=16006 class=data pool='rpool' priority=2 err=5 flags=0x180 bookmark=55451:0:0:12
zed[1562862]: eid=16007 class=authentication pool='rpool' priority=2 err=5 flags=0x80 bookmark=55451:0:0:12
zed[1562864]: eid=16008 class=data pool='rpool' priority=2 err=5 flags=0x80 bookmark=55451:0:0:12
sanoid[1562715]: cannot receive incremental stream: invalid backup stream
sanoid[1562719]: zstd: error 25 : Write error : Broken pipe (cannot write compressed block)
sanoid[1559281]: CRITICAL ERROR:  zfs send  -I 'rpool/ROOT/ubuntu_wrbx68/var/log'@'autosnap_2022-10-11_01:00:06_hourly' 'rpool/ROOT/ubuntu_wrbx68/var/log'@'syncoid_marseille1_XXXovh1_2022-10-11:01:02:04-GMT00:00' | pv -p -t -e -r -b -s 29023416 | zstd -3 | mbuffer  -q -s 128k -m 16M 2>/dev/null | ssh     -S /tmp/syncoid-ovh1backup@marseille1-1665450113 ovh1backup@marseille1 ' mbuffer  -q -s 128k -m 16M 2>/dev/null | zstd -dc | sudo zfs receive   -F '"'"'tank/CRYPT/OVH1/rpool/ROOT/ubuntu_wrbx68/var/log'"'"' 2>&1' failed: 256 at /usr/sbin/syncoid line 817.

Indeed, "zpool status" was reporting one snapshot of this path as corrupted. I have since started a scrub, and while "zpool status" still reports a permanent error, it is now on an anonymous file and no longer linked to the snapshot; furthermore, syncoid no longer fails. Hopefully the interrupted "send" was the trigger of the kernel crash and I won't hit it again anytime soon.

@Derkades

Encountered this today during a raw encrypted zfs send/recv using syncoid. File operations still work, but zpool/zfs commands are unresponsive.

Debian bullseye, using ZFS 2.1.9 and Linux kernel 6.0.12 from backports

zfs-2.1.9-1~bpo11+1
zfs-kmod-2.1.9-1~bpo11+1
[72358.039249] BUG: kernel NULL pointer dereference, address: 0000000000000000
[72358.039266] #PF: supervisor read access in kernel mode
[72358.039273] #PF: error_code(0x0000) - not-present page
[72358.039280] PGD 0 P4D 0 
[72358.039286] Oops: 0000 [#1] PREEMPT SMP NOPTI
[72358.039293] CPU: 2 PID: 1039602 Comm: zfs Tainted: P           OE      6.0.0-0.deb11.6-amd64 #1  Debian 6.0.12-1~bpo11+1
[72358.039305] Hardware name: System manufacturer System Product Name/PRIME X370-PRO, BIOS 6042 04/28/2022
[72358.039315] RIP: 0010:zap_lockdir_impl+0x29e/0x750 [zfs]
[72358.039383] Code: 86 d8 00 00 00 00 00 00 00 e8 ce a6 14 db 4d 89 a6 98 00 00 00 e9 48 fe ff ff 48 8b 43 18 b9 a1 01 00 00 31 f6 bf 28 01 00 00 <48> 8b 10 48 8b 40 08 48 89 54 24 20 48 c7 c2 a8 58 69 c1 48 89 44
[72358.039398] RSP: 0018:ffffa970d57e3850 EFLAGS: 00010246
[72358.039406] RAX: 0000000000000000 RBX: ffff96927bb6c780 RCX: 00000000000001a1
[72358.039414] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000000000128
[72358.039422] RBP: 0000000000000002 R08: 0000000000000001 R09: 0000000000000000
[72358.039430] R10: 2000000000000000 R11: 0000008000000000 R12: ffff9692a17cb800
[72358.039439] R13: 0000000000000001 R14: 0000000000000000 R15: ffffa970d57e3980
[72358.039448] FS:  00007f218d70a7c0(0000) GS:ffff9697ae480000(0000) knlGS:0000000000000000
[72358.039457] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[72358.039464] CR2: 0000000000000000 CR3: 000000084dd04000 CR4: 0000000000750ee0
[72358.039472] PKRU: 55555554
[72358.039477] Call Trace:
[72358.039483]  <TASK>
[72358.039491]  ? zio_wait+0x1fc/0x2b0 [zfs]
[72358.039544]  zap_lockdir+0x90/0xc0 [zfs]
[72358.039584]  ? _raw_spin_unlock+0x15/0x30
[72358.039592]  zap_lookup+0x48/0x100 [zfs]
[72358.039629]  zfs_get_zplprop+0xb3/0x1a0 [zfs]
[72358.039671]  dmu_send_impl+0xe56/0x1450 [zfs]
[72358.039708]  ? _raw_spin_unlock_irqrestore+0x23/0x40
[72358.039716]  ? dsl_dataset_disown+0x90/0x90 [zfs]
[72358.039753]  ? taskq_dispatch_ent+0x101/0x1b0 [spl]
[72358.039763]  ? preempt_count_add+0x70/0xa0
[72358.039771]  ? _raw_spin_lock+0x13/0x40
[72358.039777]  ? _raw_spin_unlock+0x15/0x30
[72358.039783]  ? dbuf_rele_and_unlock+0x20a/0x690 [zfs]
[72358.039818]  dmu_send_obj+0x252/0x340 [zfs]
[72358.039852]  ? ttwu_queue_wakelist+0xbc/0x100
[72358.039862]  zfs_ioc_send+0xe8/0x2d0 [zfs]
[72358.039905]  ? zfs_ioc_send+0x2d0/0x2d0 [zfs]
[72358.039941]  zfsdev_ioctl_common+0x750/0x9b0 [zfs]
[72358.039977]  ? kmalloc_large_node+0x70/0x80
[72358.039985]  ? __kmalloc_node+0x2cc/0x390
[72358.039991]  zfsdev_ioctl+0x53/0xe0 [zfs]
[72358.040032]  __x64_sys_ioctl+0x8b/0xc0
[72358.040039]  do_syscall_64+0x3b/0xc0
[72358.040050]  entry_SYSCALL_64_after_hwframe+0x63/0xcd
[72358.040054] RIP: 0033:0x7f218d9f85f7
[72358.040059] Code: 00 00 00 48 8b 05 99 c8 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 69 c8 0d 00 f7 d8 64 89 01 48
[72358.040074] RSP: 002b:00007ffd2e8f2ba8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[72358.040083] RAX: ffffffffffffffda RBX: 0000563e40b1bcf0 RCX: 00007f218d9f85f7
[72358.040091] RDX: 00007ffd2e8f2be0 RSI: 0000000000005a1c RDI: 0000000000000003
[72358.040099] RBP: 00007ffd2e8f65d0 R08: 000000000000000f R09: 0000000000000000
[72358.040407] R10: 0000000000000009 R11: 0000000000000246 R12: 0000563e40ba7910
[72358.040673] R13: 00007ffd2e8f2be0 R14: 0000563e40b1bd00 R15: 0000000000000000
[72358.040917]  </TASK>
[72358.041144] Modules linked in: wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha sctp ip6_udp_tunnel udp_tunnel veth xt_nat xt_conntrack xt_MASQUERADE nf_conntrack_netlink xfrm_user xfrm_algo xt_addrtype br_netfilter macvtap macvlan vhost_net vhost vhost_iotlb tap tun ipt_REJECT nf_reject_ipv4 xt_tcpudp nft_compat nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink bridge lz4 lz4_compress zram stp llc overlay zsmalloc intel_rapl_msr intel_rapl_common amdgpu nls_ascii nls_cp437 vfat amd64_edac fat edac_mce_amd snd_usb_audio snd_hda_codec_hdmi gpu_sched drm_buddy snd_hda_intel zfs(POE) snd_usbmidi_lib drm_display_helper snd_intel_dspcfg snd_intel_sdw_acpi snd_rawmidi cec zunicode(POE) zzstd(OE) snd_hda_codec zlua(OE) kvm_amd snd_seq_device zavl(POE) rc_core mc snd_hda_core eeepc_wmi icp(POE) kvm snd_hwdep zcommon(POE) asus_wmi platform_profile battery ccp znvpair(POE) drm_ttm_helper snd_pcm
[72358.041177]  sparse_keymap ledtrig_audio sp5100_tco ttm rapl rfkill spl(OE) joydev wmi_bmof pcspkr mxm_wmi efi_pstore k10temp watchdog snd_timer rng_core drm_kms_helper snd sg soundcore evdev button drm fuse configfs efivarfs ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 dm_crypt dm_mod raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c crc32c_generic raid0 multipath linear vfio_pci vfio_pci_core irqbypass vfio_virqfd vfio_iommu_type1 vfio hid_generic raid1 usbhid md_mod hid crc32_pclmul crc32c_intel sd_mod ghash_clmulni_intel nvme nvme_core t10_pi xhci_pci ahci libahci xhci_hcd aesni_intel crc64_rocksoft_generic libata crypto_simd e1000e igb cryptd usbcore i2c_algo_bit scsi_mod dca crc64_rocksoft ptp crc_t10dif i2c_piix4 crct10dif_generic crct10dif_pclmul crc64 pps_core crct10dif_common usb_common scsi_common video gpio_amdpt wmi gpio_generic
[72358.043553] CR2: 0000000000000000
[72358.044164] ---[ end trace 0000000000000000 ]---
[72358.741371] RIP: 0010:zap_lockdir_impl+0x29e/0x750 [zfs]
[72358.741371] Code: 86 d8 00 00 00 00 00 00 00 e8 ce a6 14 db 4d 89 a6 98 00 00 00 e9 48 fe ff ff 48 8b 43 18 b9 a1 01 00 00 31 f6 bf 28 01 00 00 <48> 8b 10 48 8b 40 08 48 89 54 24 20 48 c7 c2 a8 58 69 c1 48 89 44
[72358.741371] RSP: 0018:ffffa970d57e3850 EFLAGS: 00010246
[72358.741371] RAX: 0000000000000000 RBX: ffff96927bb6c780 RCX: 00000000000001a1
[72358.741371] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 0000000000000128
[72358.741371] RBP: 0000000000000002 R08: 0000000000000001 R09: 0000000000000000
[72358.741371] R10: 2000000000000000 R11: 0000008000000000 R12: ffff9692a17cb800
[72358.741371] R13: 0000000000000001 R14: 0000000000000000 R15: ffffa970d57e3980
[72358.741371] FS:  00007f218d70a7c0(0000) GS:ffff9697ae480000(0000) knlGS:0000000000000000
[72358.741371] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[72358.741371] CR2: 0000000000000000 CR3: 000000084dd04000 CR4: 0000000000750ee0
[72358.741371] PKRU: 55555554

lyz-code added a commit to lyz-code/blue-book that referenced this issue Mar 5, 2024
One part of Web 3.0 is being able to annotate and share comments on the web. This article is my best attempt to find a nice open-source, privacy-friendly tool. Spoiler: there aren't any :P

The alternative I'm using so far is to process the data at the same time as I underline it.

- On mobile/tablet you can split the screen and have Orgzly in one pane and the browser in the other, so that underlining and copy-pasting don't break the workflow too much.
- On the eBook I underline and post-process afterwards.

Using an underlining tool makes sense when you post-process the content in a more efficient environment such as a laptop.

Using Orgzly is a kind of preprocessing. If the underlining software could easily export the highlighted content along with a link to the source, it would be much quicker.

Another advantage of Orgzly is that it works today, both online and offline, and it is more privacy-friendly.

In the post I review some of the existing solutions.

feat(ansible_snippets#Avoid arbitrary disk mount): Avoid arbitrary disk mount

Instead of using `/dev/sda` use `/dev/disk/by-id/whatever`

feat(ansible_snippets#Get the user running ansible in the host ): Get the user running ansible in the host

If you `gather_facts` use the `ansible_user_id` variable.

feat(antiracism): Recommend the podcast episode "El diario de Jadiya"

[Diario de Jadiya](https://deesonosehabla.com/episodios/episodio-2-jadiya/) ([link to the audio file](https://dts.podtrac.com/redirect.mp3/dovetail.prxu.org/302/7fa33dd2-3f29-48f5-ad96-f6874909d9fb/Master_ep.2_Jadiya.mp3)): something every xenophobe should listen to. It's the diary of a Sahrawi girl who took part in the program that hosts Sahrawi children with Spanish families over the summer.

feat(bash_snippets#Self delete shell script): Self delete shell script

Add at the end of the script

```bash
rm -- "$0"
```

`$0` is a magic variable for the full path of the executed script.
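A quick demo of the pattern (a sketch; `/tmp/selfdel.sh` is a placeholder path):

```shell
# Create a throwaway script that deletes itself after its only run.
cat > /tmp/selfdel.sh <<'EOF'
#!/bin/sh
echo "running once"
rm -- "$0"
EOF

sh /tmp/selfdel.sh   # prints "running once"; the file is gone afterwards
```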

feat(bash_snippets#Add a user to the sudoers through command line ): Add a user to the sudoers through command line

Add the user to the sudo group:

```bash
sudo  usermod -a -G sudo <username>
```

The change will take effect the next time the user logs in.

This works because `/etc/sudoers` is pre-configured to grant permissions to all members of this group (You should not have to make any changes to this):

```bash
%sudo   ALL=(ALL:ALL) ALL
```
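To confirm the change took effect you can inspect the user's group list; a minimal sketch, simulating `id -nG` output for a hypothetical user `alice`:

```shell
# On a real system you would run: id -nG alice
groups_line="alice cdrom floppy sudo audio"   # simulated `id -nG` output

# Split the space-separated list into lines and look for an exact match.
if echo "$groups_line" | tr ' ' '\n' | grep -qx sudo; then
  echo "user is in the sudo group"
fi
```

Remember the user must log in again before the new group shows up.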

feat(bash_snippets#Error management done well in bash): Error management done well in bash

If you wish to capture errors in bash you can use the following pattern:

```bash
if ! echo "$EMAIL" >> "$USER_TOTP_FILE"; then
	echo "** Error: could not associate email for user $USERNAME"
	exit 1
fi
```

feat(bats): Introduce bats

Bats (Bash Automated Testing System) is a TAP-compliant testing framework for Bash 3.2 or above. It provides a simple way to verify that the UNIX programs you write behave as expected.

A Bats test file is a Bash script with special syntax for defining test cases. Under the hood, each test case is just a function with a description.

```bash
@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}

@test "addition using dc" {
  result="$(echo 2 2+p | dc)"
  [ "$result" -eq 4 ]
}
```

Bats is most useful when testing software written in Bash, but you can use it to test any UNIX program.
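To try it out, save the first test case to a file and run it with `bats` (a sketch; `/tmp/addition.bats` is a placeholder path, and bats must be installed for the final command):

```shell
# Write the example test case to a Bats test file.
cat > /tmp/addition.bats <<'EOF'
@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}
EOF

# Run it (requires bats to be installed):
# bats /tmp/addition.bats
```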

References:
- [Source](https://github.com/bats-core/bats-core)
- [Docs](https://bats-core.readthedocs.io/)

feat(calendar_management#Calendar event notification system): Add calendar event notification system tool

Set up a system that notifies you when the next calendar event is about to start, to avoid spending mental load on it and to reduce the chances of missing the event.

I've created a small tool that:

- Tells me the number of [pomodoros](task_tools.md#pomodoro) that I have until the next event.
- Once a pomodoro finishes, it makes me focus on the time left so that I can prepare for the event.
- Catches my attention when the event is starting.
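The core arithmetic can be sketched in a few lines of shell, assuming 25-minute pomodoros (`next_event` is a placeholder epoch; a real tool would read it from the calendar):

```shell
# How many full pomodoros fit before the next event?
now=$(date +%s)
next_event=$((now + 3600))                 # pretend the next event is in 1 hour
pomodoros=$(( (next_event - now) / (25 * 60) ))
echo "$pomodoros pomodoro(s) until the next event"   # 2 for a 1-hour gap
```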

feat(python_snippets#Fix variable is unbound pyright error): Fix variable is unbound pyright error

You may receive these warnings if you set variables inside `if` or `try/except` blocks, such as in the next one:

```python
def x():
    y = True
    if y:
        a = 1
    print(a)  # "a" is possibly unbound
```

The easy fix is to set `a = None` outside those blocks:

```python
def x():
    a = None
    y = True
    if y:
        a = 1
    print(a)  # "a" is now always bound
```

feat(detox): Introduce detox

detox cleans up filenames from the command line.

Installation:

```bash
apt-get install detox
```

Usage:

```bash
detox *
```

feat(aws#Get the role used by the instance): Get the role used by the instance
```bash
aws sts get-caller-identity
{
    "UserId": "AIDAxxx",
    "Account": "xxx",
    "Arn": "arn:aws:iam::xxx:user/Tyrone321"
}
```

You can then take the principal name and query IAM for the role details, using `iam list-role-policies` for inline policies and `iam list-attached-role-policies` for attached managed policies (thanks to @Dimitry K for the callout).

```bash
$ aws iam list-attached-role-policies --role-name Tyrone321
{
  "AttachedPolicies": [
  {
    "PolicyName": "SomePolicy",
    "PolicyArn": "arn:aws:iam::aws:policy/xxx"
  },
  {
    "PolicyName": "AnotherPolicy",
    "PolicyArn": "arn:aws:iam::aws:policy/xxx"
  } ]
}
```

To get the actual IAM permissions, use `aws iam get-policy` to get the default policy version ID, and then `aws iam get-policy-version` with that version ID to retrieve the actual policy statements. If the IAM principal is a user, the commands are `aws iam list-attached-user-policies` and `aws iam get-user-policy`.
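Chained together, the lookup looks roughly like this (a sketch; the live calls need AWS credentials, so they are shown commented out, and the ARN is a placeholder based on the example above):

```shell
# Placeholder ARN taken from the list-attached-role-policies output above.
policy_arn="arn:aws:iam::aws:policy/SomePolicy"

# 1. Get the default version id of the policy:
# ver=$(aws iam get-policy --policy-arn "$policy_arn" \
#         --query 'Policy.DefaultVersionId' --output text)

# 2. Fetch that version's actual statements:
# aws iam get-policy-version --policy-arn "$policy_arn" \
#     --version-id "$ver" --query 'PolicyVersion.Document'

echo "would inspect $policy_arn"
```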

feat(kubectl#namespaces): Improve the way to manage kubernetes namespaces

Temporarily set the namespace for a single request:

```bash
kubectl -n {{ namespace_name }} {{ command_to_execute }}
kubectl --namespace={{ namespace_name }} {{ command_to_execute }}
```

Permanently set the namespace for all subsequent requests:

```bash
kubectl config set-context --current --namespace={{ namespace_name }}
```

To make things easier you can set an alias:

```bash
alias kn='kubectl config set-context --current --namespace '
```

To unset the namespace use `kubectl config set-context --current --namespace=""`.

fix(kubernetes_jobs#the-new-way): Improve the Cronjob monitoring expression

```yaml
- alert: CronJobStatusFailed
  expr: kube_cronjob_status_last_successful_time{exported_namespace!=""} - kube_cronjob_status_last_schedule_time < 0
  for: 5m
  annotations:
    description: |
      '{{ $labels.cronjob }} at {{ $labels.exported_namespace }} namespace last run hasn't been successful for {{ $value }} seconds.'
```

feat(digital_garden#link-rot): Manage link rot

Link rot occurs when hyperlinks become obsolete or broken, leading to content loss or diminished user experience. Here are some ways to mitigate link rot in digital gardens:

- Use Permalinks: Ensure that your digital garden software supports permanent URLs (permalinks) for each note or idea. Permalinks make it easier to reference and maintain links over time because they remain stable even if the underlying content changes. This is uncomfortable to do unless your editor supports it transparently.
- Regularly Update Links: Check for broken or outdated links with automated link checkers and replace them with current references.
- Implement Redirects: When restructuring your digital garden or moving content to different locations, set up redirects for old URLs to ensure that visitors are directed to the new location. This prevents link rot and maintains the continuity of your digital garden. I don't do it, as I haven't found a way to do this automatically.
- [Archive External Content](#archive-external-content): When linking to external websites or resources, consider using web archiving services to create snapshots or archives of the content. This ensures that even if the original content becomes unavailable, visitors can still access archived versions. Check [the section below](#archive-external-content) for more information
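
An automated link checker only needs the list of outgoing URLs; a minimal Python sketch that extracts them from markdown (the actual HTTP check of each URL is left out, and the regex only covers inline `[text](http...)` links):

```python
import re

# Matches inline markdown links whose target starts with http(s)
LINK_RE = re.compile(r"\[([^\]]*)\]\((http[^)]+)\)")

def external_links(markdown: str) -> list[str]:
    """Return the external URLs referenced in a markdown document."""
    return [url for _text, url in LINK_RE.findall(markdown)]

text = "See [the docs](https://example.com/docs) and [a note](notes.md)."
print(external_links(text))  # ['https://example.com/docs']
```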

feat(docker#Using the json driver): Monitor logs with json driver

This is the cleanest way to do it in my opinion. First configure `docker` to output the logs as json by adding to `/etc/docker/daemon.json`:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Then use [promtail's `docker_sd_configs`](promtail.md#scrape-docker-logs).

feat(wireguard#installation): Introduce wireguard installation

WireGuard is available from the default repositories. To install it, run the following commands:

```bash
sudo apt install wireguard
```

The `wg` and `wg-quick` command-line tools allow you to configure and manage the WireGuard interfaces.

Each device in the WireGuard VPN network needs to have a private and public key. Run the following command to generate the key pair:

```bash
wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
```

The files will be generated in the `/etc/wireguard` directory.

Wireguard also supports a pre-shared key, which adds an additional layer of symmetric-key cryptography. This key is optional and must be unique for each peer pair.

The next step is to configure the tunnel device that will route the VPN traffic.

The device can be set up either from the command line using the `ip` and `wg` commands, or by creating the configuration file with a text editor.

Create a new file named `wg0.conf` and add the following contents:

```bash
sudo nano /etc/wireguard/wg0.conf
```

```ini
[Interface]
Address = 10.0.0.1/24
SaveConfig = true
ListenPort = 51820
PrivateKey = SERVER_PRIVATE_KEY
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE
```

The interface can be named anything, although it is recommended to use something like `wg0` or `wgvpn0`. The settings in the `[Interface]` section have the following meaning:

- Address: A comma-separated list of v4 or v6 IP addresses for the `wg0` interface. Use IPs from a range that is reserved for private networks (10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16).
- ListenPort: The listening port.
- PrivateKey: A private key generated by the `wg genkey` command. (To see the contents of the file type: `sudo cat /etc/wireguard/privatekey`)
- SaveConfig: When set to true, the current state of the interface is saved to the configuration file on shutdown.
- PostUp: Command or script that is executed after bringing the interface up. In this example, we're using iptables to enable masquerading. This allows traffic to leave the server, giving the VPN clients access to the Internet.

  Make sure to replace `ens3` after `-A POSTROUTING` to match the name of your public network interface. You can easily find the interface with:

  ```bash
  ip -o -4 route show to default | awk '{print $5}'
  ```

- PostDown: Command or script that is executed after bringing the interface down. Here, the iptables rules are removed once the interface is down.

The `wg0.conf` and `privatekey` files should not be readable by normal users. Use `chmod` to set the permissions to `600`:

```bash
sudo chmod 600 /etc/wireguard/{privatekey,wg0.conf}
```

Once done, bring the `wg0` interface up using the attributes specified in the configuration file:

```bash
sudo wg-quick up wg0
```

The command will produce an output similar to the following:

```bash
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.0.1/24 dev wg0
[#] ip link set mtu 1420 up dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
```

To check the interface state and configuration, enter:

```bash
sudo wg show wg0

interface: wg0
  public key: r3imyh3MCYggaZACmkx+CxlD6uAmICI8pe/PGq8+qCg=
  private key: (hidden)
```

You can also run `ip a show wg0` to verify the interface state:

```bash
ip a show wg0

4: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.0.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever
```

WireGuard can also be managed with Systemd.

To bring up the WireGuard interface at boot time, run the following command:

```bash
sudo systemctl enable wg-quick@wg0
```

IP forwarding must be enabled for NAT to work. Open the `/etc/sysctl.conf` file and add or uncomment the following line:

```bash
sudo vi /etc/sysctl.conf
```

```ini
net.ipv4.ip_forward=1
```

Save the file and apply the change:

```bash
sudo sysctl -p

net.ipv4.ip_forward = 1
```

If you are using UFW to manage your firewall you need to open UDP traffic on port 51820:

```bash
sudo ufw allow 51820/udp
```

Also install `wireguard` on your clients. The process for setting up a client is pretty much the same as for the server.

If the client is on Android, [the official app](https://www.wireguard.com/install/) is not on F-Droid, but you can get it through the Aurora store.

First generate the public and private keys:

```bash
wg genkey | sudo tee /etc/wireguard/privatekey | wg pubkey | sudo tee /etc/wireguard/publickey
```

Create the file `wg0.conf` and add the following contents:

```bash
sudo vi /etc/wireguard/wg0.conf
```

```ini
[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.0.0.2/24

[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = SERVER_IP_ADDRESS:51820
AllowedIPs = 0.0.0.0/0
```

The settings in the interface section have the same meaning as when setting up the server:

- Address: A comma-separated list of v4 or v6 IP addresses for the `wg0` interface.
- PrivateKey: To see the contents of the file on the client machine run: `sudo cat /etc/wireguard/privatekey`

The peer section contains the following fields:

- PublicKey: A public key of the peer you want to connect to. (The contents of the server’s `/etc/wireguard/publickey` file).
- Endpoint: An IP or hostname of the peer you want to connect to followed by a colon, and then a port number on which the remote peer listens to.
- AllowedIPs: A comma-separated list of v4 or v6 IP addresses from which incoming traffic for the peer is allowed and to which outgoing traffic for this peer is directed. We’re using 0.0.0.0/0 because we are routing the traffic and want the server peer to send packets with any source IP.
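
The AllowedIPs matching described above is plain CIDR containment: a packet is accepted from (or routed to) the peer whose AllowedIPs contain its address. Python's `ipaddress` module can illustrate the check (the addresses here are made up for the example):

```python
import ipaddress

def matches_allowed_ips(address: str, allowed_ips: list[str]) -> bool:
    """Return True if the address falls inside any of the AllowedIPs ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in ipaddress.ip_network(net) for net in allowed_ips)

# 0.0.0.0/0 matches everything, which is why it routes all the client traffic.
print(matches_allowed_ips("93.184.216.34", ["0.0.0.0/0"]))    # True
print(matches_allowed_ips("192.168.3.123", ["10.0.0.2/32"]))  # False
```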

If you need to configure additional clients, just repeat the same steps using a different private IP address.

The last step is to add the client’s public key and IP address to the server. To do that, run the following command on the Ubuntu server:

```bash
sudo wg set wg0 peer CLIENT_PUBLIC_KEY allowed-ips 10.0.0.2
```

Make sure to change the `CLIENT_PUBLIC_KEY` with the public key you generated on the client machine (`sudo cat /etc/wireguard/publickey`) and adjust the client IP address if it is different.

Once done, go back to the client machine and bring up the tunneling interface.

```bash
sudo wg-quick up wg0
```

Now you should be connected to the Ubuntu server, and the traffic from your client machine should be routed through it. You can check the connection with:

```bash
sudo wg

interface: wg0
  public key: gFeK6A16ncnT1FG6fJhOCMPMeY4hZa97cZCNWis7cSo=
  private key: (hidden)
  listening port: 53527
  fwmark: 0xca6c

peer: r3imyh3MCYggaZACmkx+CxlD6uAmICI8pe/PGq8+qCg=
  endpoint: XXX.XXX.XXX.XXX:51820
  allowed ips: 0.0.0.0/0
  latest handshake: 53 seconds ago
  transfer: 3.23 KiB received, 3.50 KiB sent
```

You can also open your browser, type “what is my ip”, and you should see your server IP address.

To stop the tunneling, bring down the wg0 interface:

```bash
sudo wg-quick down wg0
```

feat(wireguard#Allow the access to the local network): Allow the access to the local network

If you want to let the peer access a server on your local network, you can add it to the `allowed-ips`:

```bash
sudo wg set wg0 peer CLIENT_PUBLIC_KEY allowed-ips 10.0.0.2,192.168.3.123/32
```

Then you need to add the routes:

```bash
ip route add 192.168.3.123 dev wg0
```

feat(wireguard#Remove a peer): Remove a peer

```bash
wg show  # find the peer, note the interface and peer key
wg set <interface> peer <key> remove
```

feat(zfs#Troubleshooting): Troubleshooting general guidelines

To debug ZFS errors you can check:

- The generic kernel logs: `dmesg -T`, `/var/log/syslog` or where kernel log messages are sent.
- ZFS Kernel Module Debug Messages: The ZFS kernel modules use an internal log buffer for detailed logging information. This log information is available in the pseudo file `/proc/spl/kstat/zfs/dbgmsg` for ZFS builds where the module parameter `zfs_dbgmsg_enable` is set to `1`.

feat(zfs#ZFS pool is stuck): ZFS pool is stuck troubleshoot

Symptom: a `zfs` or `zpool` command appears hung, does not return, and is not killable.

Likely cause: a hung kernel thread or a panic.

If a kernel thread is stuck, a backtrace of the stuck thread can be found in the logs. In some cases, the stuck thread is not logged until the deadman timer expires.

The only way I've found to solve this so far is rebooting the machine (not ideal). I even have to use the magic SysRq keys -.- .

feat(zfs#kernel NULL pointer dereference in zap_lockdir): kernel NULL pointer dereference in zap_lockdir troubleshoot

There are many issues open with this behaviour: [1](https://github.com/openzfs/zfs/issues/11804), [2](https://github.com/openzfs/zfs/issues/6639)

In my case I feel it happens when running `syncoid` to send the backups to the backup server.

feat(linux_snippets#Make a file executable in a git repository): Make a file executable in a git repository

```bash
git add entrypoint.sh
git update-index --chmod=+x entrypoint.sh
```

feat(linux_snippets#Configure autologin in Debian with Gnome): Configure autologin in Debian with Gnome

Edit the `/etc/gdm3/daemon.conf` file and include:

```ini
AutomaticLoginEnable = true
AutomaticLogin = <your user>
```

feat(linux_snippets#See errors in the journalctl): See errors in the journalctl

To get all errors for running services using journalctl:

```bash
journalctl -p 3 -xb
```

where `-p 3` means priority err, `-x` provides extra message information, and `-b` means since the last boot.
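
The `-p` levels follow the standard syslog numbering, so `-p 3` covers everything from `emerg` down to `err`; a quick lookup table as a Python sketch:

```python
# Syslog priority levels used by journalctl's -p flag (RFC 5424 numbering).
SYSLOG_PRIORITIES = {
    0: "emerg",
    1: "alert",
    2: "crit",
    3: "err",
    4: "warning",
    5: "notice",
    6: "info",
    7: "debug",
}

# `-p 3` therefore shows emerg, alert, crit and err messages.
print([SYSLOG_PRIORITIES[level] for level in range(4)])
```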

feat(linux_snippets#Fix rsyslog builtin:omfile suspended error): Fix rsyslog builtin:omfile suspended error

It may be a permissions error. I have not been able to pinpoint the reason behind it.

What did solve it, though, was to remove the [allegedly deprecated parameters](https://www.rsyslog.com/doc/configuration/modules/omfile.html) from `/etc/rsyslog.conf`:

```
```

I hope that as they are the default parameters, they don't need to be set.

feat(loki#Configure alerts and rules): Configure alerts and rules

Grafana Loki includes a component called the ruler. The ruler is responsible for continually evaluating a set of configurable queries and performing an action based on the result.

This example configuration sources rules from a local disk.

```yaml
ruler:
  storage:
    type: local
    local:
      directory: /tmp/rules
  rule_path: /tmp/scratch
  alertmanager_url: http://localhost
  ring:
    kvstore:
      store: inmemory
  enable_api: true
```

There are two kinds of rules: alerting rules and recording rules.

Alerting rules allow you to define alert conditions based on LogQL expressions and to send notifications about firing alerts to an external service.

A complete example of a rules file:

```yaml
groups:
  - name: should_fire
    rules:
      - alert: HighPercentageError
        expr: |
          sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
            /
          sum(rate({app="foo", env="production"}[5m])) by (job)
            > 0.05
        for: 10m
        labels:
            severity: page
        annotations:
            summary: High request latency
  - name: credentials_leak
    rules:
      - alert: http-credentials-leaked
        annotations:
          message: "{{ $labels.job }} is leaking http basic auth credentials."
        expr: 'sum by (cluster, job, pod) (count_over_time({namespace="prod"} |~ "http(s?)://(\\w+):(\\w+)@" [5m]) > 0)'
        for: 10m
        labels:
          severity: critical
```

Recording rules allow you to precompute frequently needed or computationally expensive expressions and save their result as a new set of time series.

Querying the precomputed result will then often be much faster than executing the original expression every time it is needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh.

Loki allows you to run metric queries over your logs, which means that you can derive a numeric aggregation from your logs, like calculating the number of requests over time from your NGINX access log.

```yaml
name: NginxRules
interval: 1m
rules:
  - record: nginx:requests:rate1m
    expr: |
      sum(
        rate({container="nginx"}[1m])
      )
    labels:
      cluster: "us-central1"
```

This query (`expr`) will be executed every minute (`interval`), and its result will be stored in the metric name we have defined (`record`). This metric, named `nginx:requests:rate1m`, can now be sent to Prometheus, where it will be stored just like any other metric.

Here is an example remote-write configuration for sending to a local Prometheus instance:

```yaml
ruler:
  ... other settings ...

  remote_write:
    enabled: true
    client:
      url: http://localhost:9090/api/v1/write
```

feat(matrix): Review matrix servers

[Matrix](https://wiki.archlinux.org/index.php/Matrix) is a FLOSS protocol for open federated instant messaging. The Matrix ecosystem consists of many servers which can be used for registration.

Choose a server that doesn't engage in chaotic account or room purges. Being on such a homeserver is no different from being on Discord. If a homeserver has rules, read them to check if they're unreasonably strict. Keep an eye on the usual things that tend to stink. For example, check whether a homeserver tries to suppress certain political opinions, restricts you from posting certain types of content, or otherwise imposes an authoritarian environment.

https://view.matrix.org/ gives an overview of the most used channels, and on each of them you can see which servers people use. Looking at meaningful channels:

- [Arch linux](https://view.matrix.org/room/!GtIfdsfQtQIgbQSxwJ:archlinux.org/servers)
- GrapheneOs: [1](https://view.matrix.org/room/!lAoVmVifHHtoeOAmHO:grapheneos.org/servers), [2](https://view.matrix.org/room/!UVEsOAdphEMYhxzTah:grapheneos.org/servers)
- [Techlore](https://view.matrix.org/room/!zjYxZkVEqwWcQQhXxc:techlore.net/servers)

You can say that the most used (after matrix.org) are:

- envs.net: They have [an element page](https://element.envs.net/#/welcome), I don't find any political statement
- tchncs.de: Run by [an individual](https://tchncs.de/), I don't find any political statement either.
- t2bot.io: [It's for bots](https://t2bot.io/) so nope.
- nitro.chat: Run by [Nitrokey](https://www.nitrokey.com/about), a world-leading open source security hardware company. Nitrokey develops IT security hardware for data encryption, key management and user authentication, as well as secure network devices, PCs, laptops and smartphones. The company was founded in Berlin, Germany in 2015 and already counts tens of thousands of users from more than 120 countries, including numerous well-known international enterprises from various industries. It lists Amazon, NVIDIA, Ford and Google among their clients -> Nooope!

[Tatsumoto](https://tatsumoto-ren.github.io/blog/list-of-matrix-servers.html) doesn't recommend some of them, saying that the admins blocked rooms in the pursuit of censorship, but he also references a page that looks pretty Zionist, so I'm not sure, Rick... Maybe that censorship means they are good servers xD.

feat(memoria_historica#Terrorismo de estado en Euskadi): Recommend the documentary Carpetas azules

About state terrorism in the Basque Country.

feat(orgmode#Reload the agenda on any file change): Reload the agenda on any file change

There are two ways of doing this:

- Reload the agenda each time you save a document
- Reload the agenda each X seconds

Reload the agenda each time you save a document:

Add this to your configuration:

```lua
vim.api.nvim_create_autocmd('BufWritePost', {
  pattern = '*.org',
  callback = function()
    local bufnr = vim.fn.bufnr('orgagenda') or -1
    if bufnr > -1 then
      require('orgmode').agenda:redo()
    end
  end
})
```

This will reload the agenda window, if it's open, each time you write any org file. It won't yet work if you archive without saving, though. That can be easily fixed if you use [the auto-save plugin](vim_autosave.md).

Reload the agenda each X seconds:

Add this to your configuration:

```lua
vim.api.nvim_create_autocmd("FileType", {
  pattern = "org",
  group = vim.api.nvim_create_augroup("orgmode", { clear = true }),
  callback = function()
    -- Reload the agenda every 10 seconds if it's opened so that unsaved
    -- changes in the files are shown
    local timer = vim.loop.new_timer()
    timer:start(
      0,
      10000,
      vim.schedule_wrap(function()
        local bufnr = vim.fn.bufnr("orgagenda") or -1
        if bufnr > -1 then
          require("orgmode").agenda:redo(true)
        end
      end)
    )
  end,
})
```

feat(orgmode#links): Introduce the use of links

Orgmode supports the insertion of links with the `org_insert_link` and `org_store_link` commands. I've changed the default `<leader>oli` and `<leader>ols` bindings to some quicker ones:

```lua
mappings = {
  org = {
    -- link management
    org_insert_link = "<leader>l",
    org_store_link = "<leader>ls",
  },
}
```

These are the possible workflows:

- Discover links as you go: if you more or less know in which file the headings you want to link are:
  - Start the link helper with `<leader>l`
  - Type `file:./` and press `<tab>`; this will show you the available files. Select the one you want.
  - Then type `::*` and press `<tab>` again to get the list of available headings.
- Store the links you want to paste:
  - Go to the heading you want to link
  - Press `<leader>ls` to store the link
  - Go to the place where you want to paste the link
  - Press `<leader>l` and then `<tab>` to iterate over the saved links.

feat(orgmode#Convert source code in the fly from markdown to orgmode): Convert source code in the fly from markdown to orgmode

It would be awesome if, when you run `nvim myfile.md`, it automatically converted the file to orgmode so that you can use all its power, and converted it back to markdown once you save the file.

I've started playing around with this but got nowhere. I leave you my breadcrumbs in case you want to follow this path.

```lua
-- Load the markdown documents as orgmode documents
vim.api.nvim_create_autocmd("BufReadPost", {
  pattern = "*.md",
  callback = function()
    local markdown_file = vim.fn.expand("%:p")
    local org_content = vim.fn.system("pandoc -f gfm -t org " .. markdown_file)
    vim.cmd("%delete")
    vim.api.nvim_put(vim.fn.split(org_content, "\n"), "l", false, true)
    vim.bo.filetype = "org"
  end,
})
```

If you make it work please [tell me how you did it!](contact.md)

feat(postgres#Operations): Postgres Operations

[Restore a dump](https://www.postgresql.org/docs/current/backup-dump.html#BACKUP-DUMP-RESTORE)

Text files created by `pg_dump` are intended to be read in by the `psql` program. The general command form to restore a dump is

```bash
psql dbname < dumpfile
```

Where `dumpfile` is the file output by the `pg_dump` command. The database `dbname` will not be created by this command, so you must create it yourself from `template0` before executing `psql` (e.g., with `createdb -T template0 dbname`). `psql` supports options similar to `pg_dump` for specifying the database server to connect to and the user name to use. See [the `psql` reference page](https://www.postgresql.org/docs/current/app-psql.html) for more information. Non-text file dumps are restored using the `pg_restore` utility.

feat(postgres#Fix pg_dump version mismatch): Fix pg_dump version mismatch

If you need to use a `pg_dump` version different from the one on your system, you can either [use nix](nix.md) or use docker:

```bash
docker run postgres:9.2 pg_dump books > books.out
```

Or if you need to enter the password

```bash
docker run -v /path/to/dump:/dump -it postgres:12 bash
pg_dump books > /dump/books.out
```

fix(promtail): Improve docker log parsing to get a clean container name

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
    pipeline_stages:
      - static_labels:
          job: docker
```

fix(promtail): Improve journalctl log parsing to avoid duplicate logs

If you've set some systemd services that run docker-compose it's a good idea not to ingest them with promtail so as not to have duplicate log lines:

```yaml
scrape_configs:
  - job_name: journal
    journal:
      json: false
      max_age: 12h
      path: /var/log/journal
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: unit
      - source_labels: ['__journal__hostname']
        target_label: hostname
      - source_labels: ['__journal_syslog_identifier']
        target_label: syslog_identifier
      - source_labels: ['__journal_transport']
        target_label: transport
      - source_labels: ['__journal_priority_keyword']
        target_label: level
    pipeline_stages:
      - drop:
          source: syslog_identifier
          value: docker-compose
```

feat(roadmap_adjustment): Introduce roadmap adjustment

Roadmap adjustment gathers the techniques to make and review plans in order to define the optimal path in terms of efficacy and efficiency.

Roadmap adjustment can be categorized by the following approaches:

- [Adjustment type](roadmap_adjustment.md#roadmap-adjustment-types)
- [Abstraction level](roadmap_adjustment.md#roadmap-adjustments-by-abstraction-level)
- [Purpose](roadmap_adjustment.md#roadmap-adjustments-by-purpose)

Before you dive in, here are some warnings:

- Build your own processes: Each of the adjustments defined below describes my curated process developed over the years. You can use them as a starting point to define what works for you or to get some ideas. Each of us is different and wants to spend a different amount of time on this.

- Keep them simple: It's important for the processes to be light enough that you actually want to do them, so you see them as a help instead of a burden. It's always better to do a small and quick one rather than nothing at all. At the start of the process, analyze yourself to assess how much energy and time you have and decide which steps of the guides below you want to follow.

- Alive processes: These adjustments have to reflect ourselves and our environment. As we change continuously, so will our adjustment processes.

  I've gone from full-blown adjustments, locking myself up for a week, to not doing any for months. And that is just fine; these tools are there to help us only if we want to use them.

- Heavily orgmode oriented: This article heavily uses [orgmode](orgmode.md), my currently chosen [task tool](task_tools.md), but that doesn't mean the concepts aren't applicable to other tools.

feat(roadmap_adjustment#Roadmap adjustment types): Roadmap adjustment by types

There are three types of roadmap adjustment if we split them by process type:

- [Refinements](roadmap_adjustment.md#refinements): Clean up your system to represent the reality
- [Reviews](roadmap_adjustment.md#reviews): Gather insights about your environment and yourself
- [Plannings](roadmap_adjustment.md#plannings): Update your roadmap given the changes in your life

feat(roadmap_adjustment#Refinements): Refinements

The real trick to ensuring the trustworthiness of the whole time management system lies in regularly refreshing your thinking and your system from a more elevated perspective. That's impossible to do if your lists fall too far behind your reality. A good way to update the system is through periodic refinements.

At some point you may feel the need to clarify the larger outcomes, the long-term goals, the visions and principles that ultimately drive your decisions. I'd advise against taking this step until you can keep your everyday world under control, otherwise you may undermine your motivation and energy rather than enhance them. Once you feel that you have an abstraction level under control, jump to the next. Keep in mind that abstraction conquests are not permanent: life may wreak havoc and make you lose control of the lower levels. It's good then to step down and tidy things up, even if it means disregarding higher abstractions.

Sometimes refinements can be empowering, but they are always time and energy consuming. That's why we need to define their purpose well, so that we can hit the sweet spot of benefits against effort invested.

We need a process to review each level of abstraction:

- [Step refinement](roadmap_adjustment.md#step-refinement)
- [Task refinement](roadmap_adjustment.md#task-refinement)
- [Project refinement](roadmap_adjustment.md#project-refinement)
- [Area refinement](roadmap_adjustment.md#area-refinement)

feat(roadmap_adjustment#Reviews): Reviews

Reviews are processes to stop your daily life and do introspection, gathering insights about your environment and yourself in order to build a more efficient and effective roadmap.

Reviews can be done at different levels of purpose, each level gives you different benefits.

- [Month review](roadmap_adjustment.md#month-review)
- [Trimester review](roadmap_adjustment.md#trimester-review)
- [Year review](roadmap_adjustment.md#year-review)

Reviews guidelines:

- Review approaches: In the past I used [life logging](life_logging.md) tools to analyze the past in order to understand what I achieved and to learn from my mistakes. It was useful when I needed the endorphin boost of seeing all the progress done. Once I came to terms with that progress speed and understood that we always do the best we can given how we are, I started to feel that the review process was too cumbersome and that it was holding me in the past.

  Nowadays I try not to look back but forward and analyze the present: how I feel, how the environment around me is, and how I can tweak both to fulfill my life goals. This approach leads to less reviewing of achievements and logs, and more introspection, thinking and imagining. Although it may be slower at correcting past mistakes, it will surely make you live closer to the utopia.

  The reviews below then follow that second approach.
- Reviews as deadlines: Reviews can also be used as deadlines. Sometimes deadlines help us get the motivation and energy to achieve what we want when we feel low. But remember not to push yourself too hard. If deadlines do you more harm than good, don't use them. All these tools are meant to help us, not to bring us down.

feat(roadmap_adjustment#Plannings): Plannings

Life planning can be done at different levels. All of them help you in different ways to reduce the mental load, and each also gives you extra benefits that can't be gained from the others. Going from lowest to highest abstraction level we have:

- [Day plan](roadmap_adjustment.md#make-a-day-plan).
- Week plan.
- Month plan.
- Trimester plan.
- Year plan.

feat(roadmap_adjustment#Step refinement): Step refinement

The purpose is to make sure that the step description meets the following criteria:

- It still represents what needs to be done. Sometimes it's something that is already done, or that the circumstances have changed in a way that we need to rephrase the step.
- It's clear enough that you don't need to think about anything to start working on it.

It can be done:

- When you create a new step.
- Each time you read a step and feel that it doesn't meet the criteria.

feat(roadmap_adjustment#Task refinement): Task refinement

It fulfills these purposes:

- Define the steps required to finish a task.
- Make sure that the task still reflects a real need.
- Make sure that there is always a refined next step to finish the task.
- Clean up all the done elements that don't add value.
- Ease the overwhelm feeling when faced with a daunting task.

When done well, you'll better understand what you need to do, it will prevent
you from wasting time at dead ends as you'll think before acting, and you'll
develop the invaluable skill of breaking big problems into smaller ones.

It can be done differently at different moments:

- When you create a new task:
  - Decide what you want to achieve when the task is finished.
  - Create a descriptive task title.
  - Analyze the possible ways to arrive at that outcome. Try to assess different
    solutions before choosing one.
  - Create a list of [refined steps](#step-refinement) for each of them.

- When you finish a step and don't know how to go on, so you need to look at the step list, and the next one is not refined enough:
  - Mark the done steps as done.
  - Do the [step refinement](#step-refinement) of the immediate next one.

- When you're working on the task and feel that it needs an update: It can be because:
  - You've been working for a while on steps of a task that are not defined in the plan and feel that you've passed several bifurcations that you want to investigate and are afraid to forget them. For example imagine that your task plan looks like this:

    ```orgmode
    - [ ] Do A
    - [ ] Do B
    ```

    But while working on A you've actually done:

    ```orgmode
    - [ ] Do A
      - [x] Do A.1
      - [ ] Do A.2
        - [x] Do A.2.1
        - [ ] Do A.2.2
        - [ ] Investigate A.2.3
      - [ ] Investigate A.3
    - [ ] Do B
    ```

    If you find yourself doing 'Do A.2.2' but are afraid of losing 'Investigate A.2.3' and 'Investigate A.3', go back to the task plan and update it to reflect the current state. There is no need to fill in the things that you've already done, only the ones that you still want to do.

  - When you realize that the circumstances have changed enough that you need to update the task step list or title.

  - When you need to switch context to another task: this is especially necessary when you are going to stop working on the task. You never know when you'll be able to work on it again, so it's crucial to at least refine the next step. It's also a good moment to do some [task cleaning](#task-cleaning).

- When you read the title and need to look at the steps list to understand what it is about. Once you've grasped the idea, clarify the title.

- When the task step list has so many done items that finding the next actionable step is hard, it [requires some cleaning](#task-cleaning).

- When the task step list gets too complex: TBC

The refinement precision needs to be incremental. It doesn't make sense to have a perfect plan because you often don't have all the information required to make it, and you'll surely need to adapt it. All time spent refining steps that are later discarded in plan adaptations is wasted time.

feat(roadmap_adjustment#Task cleaning): Task cleaning

Marking steps as done can help you get an idea of the evolution of the task. It can also be useful if you want to do some kind of reporting. On the other hand, having a long list of done steps (especially if you have many levels of step indentation) may make finding the next actionable step difficult. It's a good idea then to clean up all done items often.

If you don't care about the traceability of what you've done, simply delete the done lines. If you do, until there is a more automated way:

- Archive the todo element.
- Undo the archive.
- Clean up the done items.

This way you have a snapshot of the state of the task in your archive.

feat(roadmap_adjustment#Project refinement): Project refinement

The purpose is to ensure that given the current circumstances:

- The project description represents the reality and is clear enough.
- The project roadmap defined by the task plan is the optimal path to reach the project outcome.

We can do it in two ways:

- [Rabbit hole refinement](#rabbit-hole-project-refinement)
- [Think outside the box refinement](#think-outside-the-box-project-refinement)

Rabbit hole project refinement:

This kind of refinement allows you to dig deeper in whichever path you're heading to. It's mechanical and requires a limited level of creativity. It's then perfect to apply when you've just finished a project's task.

- Read the task titles to make sure that they still make sense, following the next guidelines:
  - If the task title doesn't give you enough information, read the task steps and then tweak the task title to make it clearer.
  - Mark done tasks as done and archive them.
- If you need to, create new tasks with the minimal refinement required to register your idea.
- Change the order of the tasks to meet current priorities.
- Do a [task refinement](#task-refinement) for the most imminent one.

Think outside the box project refinement:

[Rabbit hole project refinement](#rabbit-hole-project-refinement) is the best way to reach the destination you're heading to. It may not be the optimal one, though. As you have your head deep into the rabbit hole, it's easy to miss better alternative paths to reach the project objective.

It could be interesting to use techniques that help you discover these paths for example in a [weekly planning](#the-weekly-planning).

feat(roadmap_adjustment#Area planning): Area planning

The purpose is to ensure that the area roadmap is the optimal way to reach the area goal given the current circumstances. We do it by following the next steps:

- Check the goals of the area.
- Think or write down the best ways to reach the goals without looking at the area's projects or roadmap.
- Adjust the previous ideas after reviewing the current roadmap and the future area projects.
- Decide what the optimal way is.
- Adjust the roadmap (at project level) accordingly.

This can't be done in the frenzy of everyday life, as you're prone to fall into any rabbit hole you're headed to. This is the first refinement that needs its own time and reflection. As projects don't change very often, it makes sense to do it as part of the [monthly planning](#the-monthly-planning).

feat(roadmap_adjustment#Roadmap adjustments by purpose): Roadmap adjustments by purpose

Given the level of control of your life you can do the next adjustments:

- [Survive the day](roadmap_adjustment.md#survive-the-day)
- [Survive the week](roadmap_adjustment.md#survive-the-week)
- [Ride the month](roadmap_adjustment.md#ride-the-month)

As you master a purpose level you will have more experience and tools to manage your life more efficiently, while having less stress and mental load thanks to the reduction of uncertainty. This new state in theory (if life lets you) will eventually give you the energy to jump to the next purpose levels.

feat(roadmap_adjustment#Survive the day): Survive the day

At this level you're moving with your eyes closed and only react when life throws stuff at you. You'll surely be surprised by what hits you and how hard, so you probably won't be able to address it in the best way. You just want it to stop. This adjustment level aims to let you handle those hits without missing the stuff you need to do.

This adjustment is split in the next parts:

- [Get used to work with simple tasks](roadmap_adjustment.md#get-used-to-work-with-simple-tasks)
- [Make a day plan](roadmap_adjustment.md#make-a-day-plan)
- [Follow the day plan](roadmap_adjustment.md#follow-the-day-plan)
- [Control your inbox](roadmap_adjustment.md#control-your-inbox)

Get used to work with simple tasks:

We'll start building a system that helps us not to die in agony at life's aggressions, using the spare energy left.

One way to do it is to choose the tools to manage your life. Start small, only trying to manage the [step](#step) and [task](#task) roadmap adjustments. [The simplest task manager](task_tools.md#the-simplest-task-manager) is a good start.

Make a day plan:

This plan defines, at day level, which tasks you are going to work on and schedules when you are going to address them. The goal is to survive the day. It's a good starting point if you forget to do tasks that need to be done in the day or if you miss appointments.

It's interesting to make your plan at the start of the day.

I follow the next steps:

- Clarify the state of the world
  - Get an idea of what you need to do by checking and cleaning:
    - Calendar events.
    - Your org agenda of the day
      - For each element decide if it needs to be in the agenda and refile it to the chosen destination.
    - The last day's plan.
    - The month objectives if you have them.
  - How much uninterrupted time you have between calendar events.
  - Your mental and physical state.
- Check if you can transition the `WAITING` tasks to `DOING` or `TODO`.
- Write the objectives of the day

To make it easy to follow I use a bash script that asks me to follow these steps.
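
Such a script could be as simple as a checklist runner; the sketch below is a hypothetical stand-in (not my actual script) that prints each step and waits for confirmation before moving on. The same pattern works for the weekly plan:

```bash
#!/usr/bin/env bash
# Hypothetical day-plan helper: walks through the planning checklist
# one step at a time, waiting for Enter between steps when run
# interactively.

day_plan_steps=(
  "Check and clean your calendar events"
  "Check and clean your org agenda of the day"
  "Review the last day's plan"
  "Review the month objectives if you have them"
  "Check if WAITING tasks can transition to DOING or TODO"
  "Write the objectives of the day"
)

day_plan() {
  local step
  for step in "${day_plan_steps[@]}"; do
    printf -- '-> %s\n' "$step"
    # Only pause for confirmation when attached to a terminal
    if [ -t 0 ]; then
      read -r -p 'Press Enter when done... ' _
    fi
  done
}

day_plan
```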

Follow the day plan:

There are two tools that will help to follow the day plan:

- [The calendar event notification system](calendar_management.md#calendar-event-notification-system) to avoid spending mental load tracking when the next appointment starts and to reduce the chances of missing it.
- Periodic checks of the day plan: If you use the [pomodoro technique](task_tools.md#pomodoro), after each iteration check your day objectives and assess whether you're going to finish what you proposed yourself or if you need to tweak the task steps to do so.

Control your inbox:

The [Inbox](task_tools.md#inbox) is a nasty daemon that loves to get out of control. You need to develop your inbox cleaning skills and processes up to the point that you're sure the important stuff is tracked where it should be. Aiming for a zero element inbox is unrealistic so far, though, at least for me.

feat(roadmap_adjustment#Survive the week): Survive the week

At this level you're able to open your myopic eyes, so you start to guess what life throws at you. This may be enough to gracefully handle some of the small stuff. The fast ones will still hit you though, as you still don't have much time or definition to react.

This adjustment is whatever you need to do to get your head empty again and get oriented for the next 9 days. It's split in the next phases:

- [Week plan](#week-plan)

feat(roadmap_adjustment#Week plan): Week plan

No matter how good our intentions or system may be, you're going to take in more opportunities than you can handle. The more efficient you become, the more ground you'll try to grasp. You're going to have to learn to say no faster, and to more things, in order to stay afloat and comfortable. Having some dedicated time in the week to at least get up to the project level of thinking goes a long way towards making that easier.

The plan defines, at a 9 day time scale, which projects you are going to work on. It's the next roadmap level to address a group of tasks. The goal changes from surviving the day to starting to plan your life. It's a good starting point if you are comfortable working with the pomodoro, task and day plans, and want to start deciding where you're heading to.

Make your plan at meaningful days both to make it more effective and to make it more difficult to skip it. Maybe you can do it at the start of the week. I personally do it on Thursdays because it's when I have more information about the weekend events and I have some free time.

I follow the next steps:

- Clean your agenda for the next 9 days: refile or reschedule items as needed. If you are using your calendar well you shouldn't need to make any changes, just load in your mind the things you are meant to do.

- If you're already at the ride the month adjustment:
  - Refine your month objective plans. For each objective decide the tasks/projects to be worked on and refactor them in the roadmap section of the `todo.org`.

When doing the plan, try to minimize the number of tasks and calendar appointments so as not to get overwhelmed. It's better to eventually fall short on tasks than to never reach your goal.

To make it easy to follow I use a bash script that asks me to follow these steps.

feat(roadmap_adjustment#Ride the month): Ride the month

At this level you not only had time to polish your roadmap adjustment skills, but also had the chance to buy some glasses for your myopic eyes! The increase in definition and time to react to what life throws at you lets you now take almost no hits `\\ ٩( ᐛ )و //`.

Now that you have stopped worrying about your integrity, you start to hear a little voice from within that reports what worries you, what makes you happy, what makes you mad, ... Has it been yelling all this time? `(¬º-°)¬`.

At this adjustment level we'll start using the next abstraction level, the [objectives], through the monthly personal integrity review.

Personal integrity review:

The objectives of the personal integrity review are:

- Identify how you feel and what worries you.
- Identify strong and weak points on your systems.
- Identify deadlines.

The objectives aren't to:

- Assess the progress in your objectives and decisions.

Doing this adjustment once per month is a good frequency given the speed of life change and the efforts required to do it.

It's interesting to do these reviews on meaningful days such as the last day of the month. Usually we don't have enough flexibility in our life to do it exactly that day, so schedule it as close as you can to that date. It's a good idea to do both the review and the planning on the same day.

As it's a process we're going to do very often, we need it to be relatively quick and easy so as not to invest too much time or energy in it. Keep in mind that this should be an analysis at month level in terms of abstraction; this is not the place to ask yourself if you're fulfilling your life goals. As such, you don't need that much time either: identifying the top things that pop out of your mind is more than enough.

Personal integrity review tools:

With a new level of abstraction we need tools:

- The *Review box*: it's the place where you leave notes for yourself when you do the review; it can be, for example, a physical folder or a computer text file. I use a file called `review_box.org`. It's filled when I refile the review elements captured in the rest of my inboxes.

- The *Month checks*: it's a list of elements whose evolution you want to check periodically throughout time. It's useful to analyze the validity of theories or new processes. I use the heading `Month checks` in a file called `life_checks.org`.

- The *Objective list*: It's a list of elements you want to focus your energies on. It should be easy to consult. I started with a list per month in a file called `objectives.org` and then migrated to the [life path document](#the-life-path-document).

Personal integrity review phases:

We'll divide the review process in these phases:

- [Prepare](#survive-review-prepare)
- [Discover](#survive-review-discover)
- [Analyze](#survive-review-analyze)
- [Decide](#survive-review-decide)

To record the results of the review create the file `references/reviews/YYYY_MM.org`, where the month is the one that is ending, with the following template:

```org
* Discover
* Analyze
* Decide
```

Personal integrity review prepare:

It's important that you prepare your environment for the review. You need to be present and fully focused on the process itself. To do so you can:

- Make sure you don't get interrupted:
    - Check your task manager tools to make sure that you don't have anything urgent to address in the next hour.
    - Disable all notifications
- Set your analysis environment:
    - Put on the music that helps you get *in the zone*. I found it meaningful to select the best new music I've discovered this month.
    - Get all the things you may need for the review:
        - The checklist that defines the process of your review (this document in my case).
        - Somewhere to write down the insights.
        - Your *Review box*.
        - Your *life path document*.
    - Remove from your environment everything else that may distract you
        - Close all windows in your laptop that you're not going to use

Personal integrity review discover:

Try not to, but if you do think of decisions you want to make that address the elements you're discovering, write them down in the `Decide` section of your review document.

There are different paths to discover actionable items:

- Analyze what is in your mind: Take 10 minutes to answer the next questions (you don't need to answer them all):

  - Where is your mind these days?
  - What drained your energy or brought you down emotionally this last month?
  - What worries you right now?
  - What helped you most this last month?
  - What did you enjoy most this last month?

  Notice that we don't need to review our life logging tools (diary, task manager, ...) to answer these questions. This means that we're analyzing what is in our minds right now, not throughout the month. It's flawed, but as we do this analysis often, it's probably fine. We give more importance to the latest events in our life anyway.

- Empty the elements you added to the `review box`.

- Process your `Month checks`. For each of them:
  - Think of whether you've met the check.
  - If you need, add action elements in the `Discover` section of the review.

- Process your `Month objectives`. For each of them:
  - Think of whether you've met the objective.
  - If you need, add action elements in the `Discover` section of the review.
  - If you won't need the objective in the next month, archive it.

Personal integrity review analyze:

We need to understand the identified elements better to be able to choose the right path to address them. These elements are usually representations of a state of our lives that we want to change.

- For each of them, if you can think of an immediate solution to address the element, add it to the `Decide` section; otherwise add it to `Analyze`.
- Order the elements in `Analyze` by priority.

Then allocate 20 minutes to think about them. Go from top to bottom, moving on once you feel an element is analyzed enough. You may not have time to analyze all of them; that's fine. Answering the next questions may help you:

- What defines the state we want to change?
- What are the underlying forces in your life that made you reach that state?
- To what state do you want to transition?
- What is the easiest way to reach that destination?

For the last question you can resort to:

- Habits change
- Projects change: start or stop doing a series of tasks.
- Roadmap change

Once you have analyzed an element copy all the decisions you've made in the `Decide` section of your review document.

Personal integrity review decide:

Once you have a clear definition of the current state, the new one, and how to reach it, you need to process each of the decisions you've identified through the review process so that they are represented in your life management system; otherwise you won't arrive at the desired state. To do so, analyze the best way to process each of the elements you have written in the `Decide` section. It can be one or many of the following:

- Identify hard deadlines: Add a warning days before the deadline to make sure you're reminded until it's done.
- Create or tweak a habit
- Tweak your project and tasks definitions
- Create *checks* to make sure that they are not overlooked.
- Create objectives that will be checked in the next reviews (weekly and monthly).
- Create [Anki](anki.md) cards to keep the idea in your mind.

Finally:

- Check the last month's checks and complete this month's ones.
- Pat yourself on the back as you've finished the review ^^.

feat(time_management_abstraction_levels): Introduce the time management abstraction levels

To be able to manage the complexity of the life roadmap we can use models for different levels of abstraction with different purposes. In increasing level of abstraction:

- [Step](time_management_abstraction_levels.md#step)
- [Task](time_management_abstraction_levels.md#task)
- [Project](time_management_abstraction_levels.md#project)
- [Area](time_management_abstraction_levels.md#area)
- [Goal](time_management_abstraction_levels.md#goal)
- [Vision](time_management_abstraction_levels.md#vision)
- [Purpose and principles](time_management_abstraction_levels.md#purpose-and-principles)

**Step**

It's the smallest unit in our model: a clear representation of an action you need to do. It needs to fit in a phrase and usually starts with a verb. The scope of the action has to be narrow enough that you can follow it without investing thinking energy. In orgmode they are represented as checklist items:

```orgmode
- [ ] Go to the green grocery store
```

Sometimes it's useful to add more context to the steps; you can use an indented list. For example:

```orgmode
- [ ] Call dad
  - [2023-12-11] Tried but he doesn't pick up
  - [2023-12-12] He told me to call him tomorrow
```

This is useful when you update waiting tasks.

There are cases where it's also interesting to record when you've completed a step; you can append the date at the end:

```orgmode
- [x] Completed step [2023-12-12]
```

**Task**

Model an action that is defined by a list of steps that need to be completed. It has two possible representations in orgmode:

- TODO items with checklists:

  ```orgmode
  * TODO Refill the fridge
    - [ ] Check what's left in the fridge
    - [ ] Make a list of what you want to buy
    - [ ] Go to the green grocery store
  ```

- Nested step checklists. You may realize that to make the list of what you want to buy, you first want to think of what you want to eat. You could then:

  ```orgmode
  - [ ] Make a list of what you want to buy
    - [ ] Think what you want to eat
    - [ ] Write down the list
  ```

Nested lists can also be found inside todo items:

```orgmode
* TODO Refill the fridge
  - [ ] Check what's left in the fridge
  - [ ] Make a list of what you want to buy
    - [ ] Think what you want to eat
    - [ ] Write down the list
  - [ ] Go to the green grocery store
```

This is fine as long as it's manageable; once you start seeing many levels of indentation, it's a great sign that you need to divide your task into different tasks.

*Adding more context to the task*

Sometimes a task title is not enough. You need to register more context to be able to deal with the task. In those cases we need the task to be represented as a todo element. Between the title and the step list we can add the description.

```orgmode
* TODO Task title
  This is the description of the task to add more context

  - [ ] Step 1
  - [ ] Step 2
```

If you need to use a list in the context, add a Steps section below to avoid errors in the editor.

```orgmode
* TODO Task title
  This is the description of the task to add more context:

  - Context 1
  - Context 2

  Steps:

  - [ ] Step 1
  - [ ] Step 2
```

*Preventing the closing of a task without reading the step list*

If you manage your tasks from an agenda or only read the task title, there may be cases where you feel that the task is done, but if you look at the step list you may realize that there is still stuff to do. A measure that can prevent this is to add a mark in the task title that suggests you check the steps. For example:

```orgmode
* TODO Task title (CHECK)
  - [ ] ...
```

This is especially useful on recurring tasks that have a defined workflow that needs to be followed, or on tasks that have defined validation criteria.

**Project**

Model an action that gathers a list of tasks towards a common greater outcome.

```orgmode
* TODO Guarantee you eat well this week
** TODO Plan what you want to eat
   - [ ] ...
** TODO Refill the fridge
   - [ ] ...
** TODO Batch cook for the week
   - [ ] ...
```

**Area**

Model a group of projects and tasks that share the same interest, roles or accountabilities. These are not things to finish but rather criteria for analyzing and defining a specific aspect of your life, and for prioritizing projects to reach a higher outcome. We'll use areas to maintain balance and sustainability in our responsibilities as we operate in the world.

I use specific orgmode files with the next structure:

```orgmode

Objectives:
- [ ] ...

* Area roadmap
  ...
* Area backlog
  ...
```

To find them easily I add a section in the `index.org` of the documentation repository. For example:

```orgmode
* Areas
** [[file:./happiness.org][Happiness]]
*** Project 1 of happiness
** [[file:./activism.org][Activism]]
** [[file:./efficiency.org][Efficiency]]
** [[file:./work.org][Work]]
```

**Objective**

An [objective] is an idea of the future or desired result that a person or a group of people envision, plan, and commit to achieve.

**Strategy**

[Strategy](strategy.md) is a general plan to achieve one or more long-term or overall objectives under conditions of uncertainty. They can be used to define the direction of the [areas](#area).

**Tactic**

A [tactic](https://en.wikipedia.org/wiki/Tactic_(method)) is a conceptual action or short series of actions with the aim of achieving a short-term goal. This action can be implemented as one or more specific tasks.

**Life path**

Models the evolution of the principles and objectives throughout time. It's the highest level of abstraction of my life management system so far, and it will probably be refactored soon into other documents.

The structure of the [orgmode](orgmode.md) document is as follows:

```orgmode
* Life path
** {year}
*** Principles of {season} {year}
    {Notes on the season}
    - Principle 1
    - Principle 2
    ...

**** Objectives of {month} {year}
     - [-] Objective 1
       - [X] SubObjective 1
       - [ ] SubObjective 2
     - [ ] Objective 2
     - [ ] ...
```

Where the principles are usually links to principle documents and the objectives are links to tasks.

**Goal**

Model what you want to be experiencing in various areas of your life one or two years from now. A `goals.org` file with a list of headings may work.

**Vision**

Aggregate group of goals under a three to five year time span common outcome. They help you think about bigger categories: life strategies, environmental trends, political context, career and lifestyle transition circumstances. I haven't reached this level of abstraction yet, so I'm not sure how to implement it.

**Purpose and principles**

The purpose defines the reason and meaning of your existence, principles define your morals, the parameters of action and the criteria for excellence of conduct. These are the core definition of what you really are. Visions, goals, objectives, projects and tasks derive and lead towards them.

As we increase the level of abstraction we need more time and energy (both mental and willpower) to adjust the path. It may also mean that the efforts invested so far are not aligned with the new direction, so we may need to throw away some of the advances made. That's why we need to support those changes with a higher level of analysis and thought.

feat(vim_autosave): Autosave in vim

To automatically save your changes in NeoVim you can use the [auto-save](https://github.com/okuuva/auto-save.nvim?tab=readme-ov-file#%EF%B8%8F-configuration) plugin.

It has some nice features:

- Automatically save your changes so the world doesn't collapse
- Highly customizable:
  - Conditionals to assert whether to save or not
  - Execution message (it can be dimmed and personalized)
  - Events that trigger auto-save
- Debounce the save with a delay
- Hook into the lifecycle with autocommands
- Automatically clean the message area

**[Installation](https://github.com/okuuva/auto-save.nvim?tab=readme-ov-file#-installation)**

```lua
{
  "okuuva/auto-save.nvim",
  cmd = "ASToggle", -- optional for lazy loading on command
  event = { "InsertLeave", "TextChanged" }, -- optional for lazy loading on trigger events
  opts = {
    -- your config goes here
    -- https://github.com/okuuva/auto-save.nvim?tab=readme-ov-file#%EF%B8%8F-configuration
    execution_message = {
      enabled = false,
    },
  },
},
```

feat(vim_plugin_development): Record a good example of a simple plugin

Check [org-checkbox](https://github.com/massix/org-checkbox.nvim/blob/trunk/lua/orgcheckbox/init.lua) to see a simple one.

feat(wordpress#Interact with Wordpress with Python): Interact with Wordpress with Python

Read [this article](https://robingeuens.com/blog/python-wordpress-api/)
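
In the meantime, a minimal sketch of the idea: WordPress exposes a REST API under `/wp-json/wp/v2/`, so you can read posts with just the standard library. The site URL below is a placeholder:

```python
# Hedged sketch of reading posts through the WordPress REST API
# (the wp/v2 namespace). The site used is a placeholder.
import json
from urllib.request import urlopen


def posts_endpoint(site: str, per_page: int = 10) -> str:
    """Build the wp/v2 endpoint that lists published posts."""
    return f"{site.rstrip('/')}/wp-json/wp/v2/posts?per_page={per_page}"


def fetch_posts(site: str, per_page: int = 10) -> list:
    """Fetch the latest posts as a list of dicts (title, content, ...)."""
    with urlopen(posts_endpoint(site, per_page)) as response:
        return json.load(response)


# Example (needs a reachable WordPress site):
# for post in fetch_posts("https://example.com"):
#     print(post["title"]["rendered"])
```

Writing (creating or updating posts) additionally needs authentication, for example an application password sent as HTTP Basic auth; the article covers that part.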

feat(yt-dlp): Install yt-dlp

```bash
pipx install --pip-args='--pre' yt-dlp
```

feat(zalando_postgres_operator): …