{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":3356358,"defaultBranch":"kernelci","name":"linux","ownerLogin":"riteshharjani","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2012-02-04T23:56:03.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1408924?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1709722428.0","currentOid":""},"activityList":{"items":[{"before":null,"after":"affbcaa016e8585df838f393aa80029b8f195e38","ref":"refs/heads/ext2-iomap-lsfmm-rfcv2","pushedAt":"2024-03-06T10:53:48.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext2: Implement seq counter for validating cached iomap\n\nThere is a possibility of following race with iomap during\nwritebck -\n\nwrite_cache_pages()\n cache extent covering 0..1MB range\n write page at offset 0k\n\t\t\t\t\ttruncate(file, 4k)\n\t\t\t\t\t drops all relevant pages\n\t\t\t\t\t frees fs blocks\n\t\t\t\t\tpwrite(file, 4k, 4k)\n\t\t\t\t\t creates dirty page in the page cache\n writes page at offset 4k to a stale block\n\nThis race can happen because iomap_writepages() keeps a cached extent mapping\nwithin struct iomap. While write_cache_pages() is going over each folio,\n(can cache a large extent range), if a truncate happens in parallel on the\nnext folio followed by a buffered write to the same offset within the file,\nthis can change logical to physical offset of the cached iomap mapping.\nThat means, the cached iomap has now become stale.\n\nThis patch implements the seq counter approach for revalidation of stale\niomap mappings. i_blkseq will get incremented for every block\nallocation/free. Here is what we do -\n\nFor ext2 buffered-writes, the block allocation happens at the\n->write_iter time itself. 
So at writeback time,\n1. We first cache the i_blkseq.\n2. Call ext2_get_blocks(, create = 0) to get the no. of blocks\n already allocated.\n3. Call ext2_get_blocks() the second time with length to be same as\n the no. of blocks we know were already allocated.\n4. Till now it means, the cached i_blkseq remains valid as no block\n allocation has happened yet.\nThis means the next call to ->map_blocks(), we can verify whether the\ni_blkseq has raced with truncate or not. If not, then i_blkseq will\nremain valid.\n\nIn case of a hole (could happen with mmaped writes), we only allocate\n1 block at a time anyways. So even if the i_blkseq value changes right\nafter, we anyway need to allocate the next block in subsequent\n->map_blocks() call.\n\nSigned-off-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"ext2: Implement seq counter for validating cached iomap"}},{"before":null,"after":"426173ce2dbba222d0620020894c65837772b172","ref":"refs/heads/dev-ext4","pushedAt":"2024-03-06T04:53:02.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: Add more info to ext4 resize\n\nWe are anyway doing an ext4_msg at the start and end of\next4_resize_fs(). 
Move the start message after all the useless returns\nfrom where the resize msgs make sense.\n\nAlso add more info in the same resize msg about the descriptor blocks\nand resize_fs/meta_bg feature state.\n\nSigned-off-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"ext4: Add more info to ext4 resize"}},{"before":null,"after":"c8318dd2a63c230e5e731bc14750a466d7891ddc","ref":"refs/heads/notmuch-cover.1708709155.git.john@groves.net","pushedAt":"2024-03-02T21:07:42.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"famfs: Add Kconfig and Makefile plumbing\n\nAdd famfs Kconfig and Makefile, and hook into fs/Kconfig and fs/Makefile\n\nSigned-off-by: John Groves ","shortMessageHtmlLink":"famfs: Add Kconfig and Makefile plumbing"}},{"before":"4640e2be3920168f6b26512466562accb783423a","after":"1030252236ae424e4cf7b75bb16f76f6364e9433","ref":"refs/heads/notmuch-20240302181755.9192-1-shikemeng@huaweicloud.com","pushedAt":"2024-03-02T20:28:59.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: initialize sbi->s_freeclusters_counter before use in kunit test\n\nFix warning that sbi->s_freeclusters_counter is used before\ninitialization.\n\nSigned-off-by: Kemeng Shi ","shortMessageHtmlLink":"ext4: initialize sbi->s_freeclusters_counter before use in kunit test"}},{"before":"f22f705c459e3249d8324b7695e4fa5ce867674e","after":"4640e2be3920168f6b26512466562accb783423a","ref":"refs/heads/notmuch-20240302181755.9192-1-shikemeng@huaweicloud.com","pushedAt":"2024-03-02T20:20:32.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh 
Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"Merge tag 'xfs-6.8-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux\n\nPull xfs fix from Chandan Babu:\n \"Drop experimental warning message when mounting an xfs filesystem on\n an fsdax device. We now consider xfs on fsdax to be stable\"\n\n* tag 'xfs-6.8-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:\n xfs: drop experimental warning for FSDAX","shortMessageHtmlLink":"Merge tag 'xfs-6.8-fixes-4' of git://git.kernel.org/pub/scm/fs/xfs/xf…"}},{"before":null,"after":"f22f705c459e3249d8324b7695e4fa5ce867674e","ref":"refs/heads/notmuch-20240302181755.9192-1-shikemeng@huaweicloud.com","pushedAt":"2024-03-02T20:15:45.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: correct some stale comment of criteria\n\nWe named criteria with CR_XXX, correct stale comment to criteria with\nraw number.\n\nSigned-off-by: Kemeng Shi ","shortMessageHtmlLink":"ext4: correct some stale comment of criteria"}},{"before":null,"after":"eca99371abc77add6a3f25d829df7d1cc1a2e789","ref":"refs/heads/xfs-block-atomic-v4","pushedAt":"2024-02-24T15:43:32.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"nvme: Ensure atomic writes will be executed atomically\n\nThere is no dedicated NVMe atomic write command (which may error for a\ncommand which exceeds the controller atomic write limits).\n\nAs an insurance policy against the block layer sending requests which\ncannot be executed atomically, add a check in the queue path.\n\n#jpg: some rewrite\n\nSigned-off-by: Alan Adamson 
\nSigned-off-by: John Garry ","shortMessageHtmlLink":"nvme: Ensure atomic writes will be executed atomically"}},{"before":null,"after":"9e716d325090913e86881adce749ee351757232e","ref":"refs/heads/ext4-iomap-v3","pushedAt":"2024-02-21T18:24:22.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: enable large folio for regular file with iomap buffered IO path\n\nAfter we convert buffered IO path to iomap for regular files, we can\nenable large folio for them together, that should be able to bring a lot\nof performance gains for large IO.\n\nSigned-off-by: Zhang Yi ","shortMessageHtmlLink":"ext4: enable large folio for regular file with iomap buffered IO path"}},{"before":"54ad984b5c6176e8e1209024b717030cb75c6c9f","after":"15e7b48cbf2af2424d794e3cffd1e49d4144d995","ref":"refs/heads/ext2-iomap-rebase-v1","pushedAt":"2024-02-21T18:13:30.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"wip: map multiblocks for writepage\n\nSigned-off-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"wip: map multiblocks for writepage"}},{"before":null,"after":"54ad984b5c6176e8e1209024b717030cb75c6c9f","ref":"refs/heads/ext2-iomap-rebase-v1","pushedAt":"2024-02-21T17:51:40.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"wip: map multiblocks for writepage\n\nSigned-off-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"wip: map multiblocks for 
writepage"}},{"before":null,"after":"6a84476e17f2f05e74666be0d1abd99162f348fd","ref":"refs/heads/xfs-atomic-write-v3","pushedAt":"2024-02-12T16:29:18.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"fs: xfs: Set FMODE_CAN_ATOMIC_WRITE for FS_XFLAG_ATOMICWRITES set\n\nFor when an inode is enabled for atomic writes, set FMODE_CAN_ATOMIC_WRITE\nflag.\n\nSigned-off-by: John Garry ","shortMessageHtmlLink":"fs: xfs: Set FMODE_CAN_ATOMIC_WRITE for FS_XFLAG_ATOMICWRITES set"}},{"before":null,"after":"85683348a70bad64849fe524bdc2ed7d93c84f52","ref":"refs/heads/notmuch-20231126124720.1249310-1-hch@lst.de","pushedAt":"2023-11-26T12:56:34.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"iomap: map multiple blocks at a time\n\nThe ->map_blocks interface returns a valid range for writeback, but we\nstill call back into it for every block, which is a bit inefficient.\n\nChange xfs_writepage_map to use the valid range in the map until the end\nof the folio or the dirty range inside the folio instead of calling back\ninto every block.\n\nNote that the range is not used over folio boundaries as we need to be\nable to check the mapping sequence count under the folio lock.\n\nSigned-off-by: Christoph Hellwig ","shortMessageHtmlLink":"iomap: map multiple blocks at a time"}},{"before":"0ce13bfd5302e1dc1c1d7e545df94f553be3e377","after":"78692124c54c8cabe8b08ea35b169de04b07d801","ref":"refs/heads/jan-ext4-dio-iomap-fix","pushedAt":"2023-10-12T07:15:49.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh 
Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: Properly sync file size update after O_SYNC direct IO\n\nGao Xiang has reported that on ext4 O_SYNC direct IO does not properly\nsync file size update and thus if we crash at unfortunate moment, the\nfile can have smaller size although O_SYNC IO has reported successful\ncompletion. The problem happens because update of on-disk inode size is\nhandled in ext4_dio_write_iter() *after* iomap_dio_rw() (and thus\ndio_complete() in particular) has returned and generic_file_sync() gets\ncalled by dio_complete(). Fix the problem by handling on-disk inode size\nupdate directly in our ->end_io completion handler.\n\nReferences: https://lore.kernel.org/all/02d18236-26ef-09b0-90ad-030c4fe3ee20@linux.alibaba.com\nReported-by: Gao Xiang \nSigned-off-by: Jan Kara ","shortMessageHtmlLink":"ext4: Properly sync file size update after O_SYNC direct IO"}},{"before":null,"after":"0ce13bfd5302e1dc1c1d7e545df94f553be3e377","ref":"refs/heads/jan-ext4-dio-iomap-fix","pushedAt":"2023-10-12T07:14:34.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"iomap: Add per-block dirty state tracking to improve performance\n\nWhen filesystem blocksize is less than folio size (either with\nmapping_large_folio_support() or with blocksize < pagesize) and when the\nfolio is uptodate in pagecache, then even a byte write can cause\nan entire folio to be written to disk during writeback. This happens\nbecause we currently don't have a mechanism to track per-block dirty\nstate within struct iomap_folio_state. We currently only track uptodate\nstate.\n\nThis patch implements support for tracking per-block dirty state in\niomap_folio_state->state bitmap. 
This should help improve the filesystem\nwrite performance and help reduce write amplification.\n\nPerformance testing of below fio workload reveals ~16x performance\nimprovement using nvme with XFS (4k blocksize) on Power (64K pagesize)\nFIO reported write bw scores improved from around ~28 MBps to ~452 MBps.\n\n1. \n[global]\n\tioengine=psync\n\trw=randwrite\n\toverwrite=1\n\tpre_read=1\n\tdirect=0\n\tbs=4k\n\tsize=1G\n\tdir=./\n\tnumjobs=8\n\tfdatasync=1\n\truntime=60\n\tiodepth=64\n\tgroup_reporting=1\n\n[fio-run]\n\n2. Also our internal performance team reported that this patch improves\n their database workload performance by around ~83% (with XFS on Power)\n\nReported-by: Aravinda Herle \nReported-by: Brian Foster \nSigned-off-by: Ritesh Harjani (IBM) \nReviewed-by: Darrick J. Wong ","shortMessageHtmlLink":"iomap: Add per-block dirty state tracking to improve performance"}},{"before":null,"after":"6e46c5eac75533f8ca2e7446bddedc9560a032fc","ref":"refs/heads/xfs-atomic-write-v1","pushedAt":"2023-10-04T11:14:08.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"nvme: Support atomic writes\n\nSupport reading atomic write registers to fill in request_queue\nproperties.\n\nUse following method to calculate limits:\natomic_write_max_bytes = flp2(NAWUPF ?: AWUPF)\natomic_write_unit_min = logical_block_size\natomic_write_unit_max = flp2(NAWUPF ?: AWUPF)\natomic_write_boundary = NABSPF\n\nSigned-off-by: Alan Adamson \nSigned-off-by: John Garry ","shortMessageHtmlLink":"nvme: Support atomic writes"}},{"before":null,"after":"c36cdd0721f4e1306d69feff34365c1d11d5ac56","ref":"refs/heads/notmuch-20230919201532.310085-1-shikemeng@huaweicloud.com","pushedAt":"2023-09-27T06:01:51.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh 
Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: run mballoc test with different layouts setting\n\nUse KUNIT_CASE_PARAM to run mballoc test with different layouts setting.\n\nSigned-off-by: Kemeng Shi \nReviewed-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"ext4: run mballoc test with different layouts setting"}},{"before":null,"after":"3b878bc4805dd97f67ee17c6450059a4a7f2b261","ref":"refs/heads/ext2_dir_buffer_cache_rfc_wip1","pushedAt":"2023-09-26T17:20:09.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext2: everything else to make it work\n\next2_unlink, ext2_rmdir, ext2_set_link, ext2_find_entry, ext2_delete_entry, ext2_dotdot etc.\nall those functions are converted to use buffer cache in this.\n\nI tested some basic functionalities like\n\nmkdir, rmdir, ls -al, mkdir duplicate dir, unmount & mount.\nThe basic stuff is all working.\n\nNote this still does have a functionality to grow the directories.\nI would like to first maybe use this for checking ext2 iomap conversion\nconverting full dir to buffer cache conversion.\n\nSigned-off-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"ext2: everything else to make it work"}},{"before":null,"after":"b5a6e0422a4b075b2c1c2620c3f0555495698656","ref":"refs/heads/reviews-ext4-hybrid-lun","pushedAt":"2023-09-20T03:26:03.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: optimize metadata allocation for hybrid LUNs\n\nWith LVM it is possible to create an LV with SSD storage at the\nbeginning of the LV and HDD storage at the end of the LV, and use that\nto 
separate ext4 metadata allocations (that need small random IOs)\nfrom data allocations (that are better suited for large sequential\nIOs) depending on the type of underlying storage. Between 0.5-1.0% of\nthe filesystem capacity would need to be high-IOPS storage in order to\nhold all of the internal metadata.\n\nThis would improve performance for inode and other metadata access,\nsuch as ls, find, e2fsck, and in general improve file access latency,\nmodification, truncate, unlink, transaction commit, etc.\n\nThis patch splits the largest free order group lists and average fragment\nsize lists into two other lists for IOPS/fast storage groups, and\ncr 0 / cr 1 group scanning for metadata block allocation in the following\norder:\n\nif (allocate metadata blocks)\n if (cr == 0)\n try to find group in largest free order IOPS group list\n if (cr == 1)\n try to find group in fragment size IOPS group list\n if (above two find failed)\n fall through normal group lists as before\nif (allocate data blocks)\n try to find group in normal group lists as before\n if (failed to find group in normal group && mb_enable_iops_data)\n try to find group in IOPS groups\n\nNon-metadata block allocation does not allocate from the IOPS groups\nif non-IOPS groups are not used up.\n\nAdd for mke2fs an option to mark which blocks are in the IOPS region\nof storage at format time:\n\n -E iops=0-1024G,4096-8192G\n\nso the ext4 mballoc code can then use the EXT4_BG_IOPS flag in the\ngroup descriptors to decide which groups to allocate dynamic\nfilesystem metadata.\n\nSigned-off-by: Bobi Jam v3: add sysfs mb_enable_iops_data to enable data block allocation\n from IOPS groups.\nv1->v2: for metadata block allocation, search in IOPS list then normal\n list.","shortMessageHtmlLink":"ext4: optimize metadata allocation for hybrid 
LUNs"}},{"before":null,"after":"e1ee6db7734e95cc76d52b5cdf05d35b9dcc2b31","ref":"refs/heads/jbd2-folio-problem-2","pushedAt":"2023-09-07T13:09:30.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"buffer: Fix definition of bh_offset() for struct buffer_head\n\nThe buffer head infrastructure is being transitioned from page based to\nfolio based (d685c668b0695df: buffer: add b_folio as an alias of\nb_page).\n\nNow, jbd2_alloc() can allocate a buffer from kmem cache when the\nbuffer_size is < PAGE_SIZE. (for e.g. 1k blocksize on 4k pagesize).\n\nThen when we save this buffer info inside buffer_head, we use\nIn folio_set_bh() we set\n\tbh->b_folio = folio;\n\tif (!highmem)\n\t bh->b_data = folio_address(folio) + offset;\n\nSo far all good. However, while using this buffer's b_data, we use\nbh_offset() or offset_in_page(), which assumes the buffer to be of\na PAGE_SIZE.\n\nThis patch fixes the definition of bh_offset() and make use of\nbh_offset() instead of offset_in_page() at places in fs/jbd2 and\nfs/reiserfs\n\nWhile we are at it, this patch converts this to use folio APIs instead.\n\nReported-by: Zorro Lang \nTested-by: Ritesh Harjani (IBM) \nSigned-off-by: Matthew Wilcox (Oracle) \nSigned-off-by: Ritesh Harjani (IBM) ","shortMessageHtmlLink":"buffer: Fix definition of bh_offset() for struct buffer_head"}},{"before":null,"after":"01bc4633b273add0cf82058c43ec81dd7656a4d7","ref":"refs/heads/jbd2-folio-problem-1","pushedAt":"2023-09-05T14:30:52.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"Revert \"jbd2: use a folio in jbd2_journal_write_metadata_buffer()\"\n\nThis reverts commit 
8147c4c4546f9f05ef03bb839b741473b28bb560.","shortMessageHtmlLink":"Revert \"jbd2: use a folio in jbd2_journal_write_metadata_buffer()\""}},{"before":"2dde18cd1d8fac735875f2e4987f11817cc0bc2c","after":"c2ed2d84a9914d2fbe94935d0b052fc78c2f92fa","ref":"refs/heads/notmuch-20230826155028.4019470-1-shikemeng@huaweicloud.com","pushedAt":"2023-08-28T06:49:19.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: run mballoc test with different layouts setting\n\nUse KUNIT_CASE_PARAM to run mballoc test with different layouts setting.\n\nSigned-off-by: Kemeng Shi ","shortMessageHtmlLink":"ext4: run mballoc test with different layouts setting"}},{"before":null,"after":"2dde18cd1d8fac735875f2e4987f11817cc0bc2c","ref":"refs/heads/notmuch-20230826155028.4019470-1-shikemeng@huaweicloud.com","pushedAt":"2023-08-28T05:15:29.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"Linux 6.5","shortMessageHtmlLink":"Linux 6.5"}},{"before":null,"after":"009aa3ba06d632b900c9792b4fb0758bcce0339a","ref":"refs/heads/review-xfs-cpuhp","pushedAt":"2023-08-25T09:34:39.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"xfs: remove cpu hotplug hooks\n\nThere are no users of the cpu hotplug hooks in xfs now, so remove the\ninfrastructure.\n\nSigned-off-by: Darrick J. 
Wong \nReviewed-by: Dave Chinner ","shortMessageHtmlLink":"xfs: remove cpu hotplug hooks"}},{"before":null,"after":"a7981f5501dbfdcc340ebe60de9ad0ea289e0de9","ref":"refs/heads/bobi-mballoc","pushedAt":"2023-08-03T09:02:00.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: optimize metadata allocation for hybrid LUNs\n\nWith LVM it is possible to create an LV with SSD storage at the\nbeginning of the LV and HDD storage at the end of the LV, and use that\nto separate ext4 metadata allocations (that need small random IOs)\nfrom data allocations (that are better suited for large sequential\nIOs) depending on the type of underlying storage. Between 0.5-1.0% of\nthe filesystem capacity would need to be high-IOPS storage in order to\nhold all of the internal metadata.\n\nThis would improve performance for inode and other metadata access,\nsuch as ls, find, e2fsck, and in general improve file access latency,\nmodification, truncate, unlink, transaction commit, etc.\n\nThis patch split largest free order group lists and average fragment\nsize lists into other two lists for IOPS/fast storage groups, and\ncr 0 / cr 1 group scanning for metadata block allocation in following\norder:\n\ncr 0 on largest free order IOPS group list\ncr 1 on average fragment size IOPS group list\ncr 0 on largest free order non-IOPS group list\ncr 1 on average fragment size non-IOPS group list\ncr >= 2 perform the linear search as before\n\nNon-metadata block allocation does not allocate from the IOPS groups.\n\nAdd for mke2fs an option to mark which blocks are in the IOPS region\nof storage at format time:\n\n -E iops=0-1024G,4096-8192G\n\nso the ext4 mballoc code can then use the EXT4_BG_IOPS flag in the\ngroup descriptors to decide which groups to allocate dynamic filesystem\nmetadata.\n\nSigned-off-by: Bobi 
Jam ","shortMessageHtmlLink":"ext4: optimize metadata allocation for hybrid LUNs"}},{"before":null,"after":"7cba00d8b6bd97f819e2908c96898ecc1bd85798","ref":"refs/heads/notmuch-20230724121059.11834-1-libaokun1@huawei.com","pushedAt":"2023-07-25T11:20:32.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: avoid overlapping preallocations due to overflow\n\nLet's say we want to allocate 2 blocks starting from 4294966386, after\npredicting the file size, start is aligned to 4294965248, len is changed\nto 2048, then end = start + size = 0x100000000. Since end is of\ntype ext4_lblk_t, i.e. uint, end is truncated to 0.\n\nThis causes (pa->pa_lstart >= end) to always hold when checking if the\ncurrent extent to be allocated crosses already preallocated blocks, so the\nresulting ac_g_ex may cross already preallocated blocks. 
Hence we convert\nthe end type to loff_t and use pa_logical_end() to avoid overflow.\n\nSigned-off-by: Baokun Li ","shortMessageHtmlLink":"ext4: avoid overlapping preallocations due to overflow"}},{"before":"dcc72b8c835c65733dc7ab6d6c215ad767439f21","after":null,"ref":"refs/tags/iomap-per-block-dirty-tracking","pushedAt":"2023-07-25T06:11:35.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"}},{"before":"4b810bf037e524b54669acbe4e0df54b15d87ea1","after":null,"ref":"refs/heads/notmuch-20230629144007.1263510-1-shikemeng@huaweicloud.com","pushedAt":"2023-07-22T04:46:28.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"}},{"before":"0981c190fd2a89da52adfdc0023223bd288f6b4f","after":"f22f705c459e3249d8324b7695e4fa5ce867674e","ref":"refs/heads/notmuch-20230721171007.2065423-1-shikemeng@huaweicloud.com","pushedAt":"2023-07-21T15:15:36.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: correct some stale comment of criteria\n\nWe named criteria with CR_XXX, correct stale comment to criteria with\nraw number.\n\nSigned-off-by: Kemeng Shi ","shortMessageHtmlLink":"ext4: correct some stale comment of criteria"}},{"before":"dc102cbaa6ab9c50122b521f5cc0ba7beb1eebd1","after":"0981c190fd2a89da52adfdc0023223bd288f6b4f","ref":"refs/heads/notmuch-20230721171007.2065423-1-shikemeng@huaweicloud.com","pushedAt":"2023-07-21T14:52:05.000Z","pushType":"push","commitsCount":6,"pusher":{"login":"riteshharjani","name":"Ritesh 
Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: correct some stale comment of criteria\n\nWe named criteria with CR_XXX, correct stale comment to criteria with\nraw number.\n\nSigned-off-by: Kemeng Shi ","shortMessageHtmlLink":"ext4: correct some stale comment of criteria"}},{"before":null,"after":"dc102cbaa6ab9c50122b521f5cc0ba7beb1eebd1","ref":"refs/heads/notmuch-20230721171007.2065423-1-shikemeng@huaweicloud.com","pushedAt":"2023-07-21T14:50:13.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"riteshharjani","name":"Ritesh Harjani","path":"/riteshharjani","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1408924?s=80&v=4"},"commit":{"message":"ext4: use is_power_of_2 helper in ext4_mb_regular_allocator\n\nUse intuitive is_power_of_2 helper in ext4_mb_regular_allocator.\n\nSigned-off-by: Kemeng Shi ","shortMessageHtmlLink":"ext4: use is_power_of_2 helper in ext4_mb_regular_allocator"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEDdLTMAA","startCursor":null,"endCursor":null}},"title":"Activity · riteshharjani/linux"}