{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":4339773,"defaultBranch":"develop","name":"htslib","ownerLogin":"samtools","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2012-05-15T19:34:48.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1518450?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1713194412.0","currentOid":""},"activityList":{"items":[{"before":"1e7efc0b9fb2472453dc22ccf30f57a6818d8585","after":"9a99a1d574a0438d7f4e8a81e60b315f653f4b68","ref":"refs/heads/develop","pushedAt":"2024-05-02T13:37:05.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Check interval start to avoid overflowing bin numbers\n\nCheck start positions of query intervals against the maximum position\nrepresentable in the index's geometry, to avoid negative bin numbers\nand the resulting infinite loops in the do...while loop.\n\nIntroduce hts_bin_maxpos() and hts_idx_maxpos(), and use them wherever the\nmaxpos calculation appears. 
(Leave the latter private, at least for now.)\n\nAlso change the existing end checks to <= as end is exclusive -- note it\nis used as end-1 in the code guarded by the checks.","shortMessageHtmlLink":"Check interval start to avoid overflowing bin numbers"}},{"before":"c93f5a57e63bc594a291b145407f1d8fcbef59bd","after":"1e7efc0b9fb2472453dc22ccf30f57a6818d8585","ref":"refs/heads/develop","pushedAt":"2024-05-02T08:28:57.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"fix fuzz integer overflow in cram encoder.\n\nInput files with very long CIGAR strings and consensus generated\nembedded reference can lead to exceptionally long CRAM blocks which\noverflow the check for large size fluctuations (to trigger new\ncompression metric assessments).\n\nReformulated the expression to avoid scaling up values.\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 68225","shortMessageHtmlLink":"fix fuzz integer overflow in cram encoder."}},{"before":"deeb9f01376ca9416315e4c9f5fe489e6f03e05f","after":"c93f5a57e63bc594a291b145407f1d8fcbef59bd","ref":"refs/heads/develop","pushedAt":"2024-04-30T11:11:03.000Z","pushType":"push","commitsCount":3,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Update htscodecs version to fix compiler void pedantry.","shortMessageHtmlLink":"Update htscodecs version to fix compiler void pedantry."}},{"before":"6a7d33abc6cae840023868ccdd946d0d8759f259","after":"0cadce238af0c6398751999bad703d4b19615860","ref":"refs/heads/master","pushedAt":"2024-04-15T15:20:12.000Z","pushType":"push","commitsCount":34,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Release 1.20","shortMessageHtmlLink":"Release 
1.20"}},{"before":"a67b53c2a2d2fe54be93362d3f8f250378b9dda3","after":"deeb9f01376ca9416315e4c9f5fe489e6f03e05f","ref":"refs/heads/develop","pushedAt":"2024-04-15T15:20:12.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Merge version number bump and NEWS file from master","shortMessageHtmlLink":"Merge version number bump and NEWS file from master"}},{"before":"1cdc7984f3f14c2c797c5af654a5e0b5667c4ec6","after":"a67b53c2a2d2fe54be93362d3f8f250378b9dda3","ref":"refs/heads/develop","pushedAt":"2024-04-12T09:15:00.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Further recommend use of libdeflate and list OS packages","shortMessageHtmlLink":"Further recommend use of libdeflate and list OS packages"}},{"before":"0cc34b3dcc869d3d6474460a51175cc371204e69","after":"1cdc7984f3f14c2c797c5af654a5e0b5667c4ec6","ref":"refs/heads/develop","pushedAt":"2024-04-11T13:45:59.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Add NEWS ready for 1.20 release","shortMessageHtmlLink":"Add NEWS ready for 1.20 release"}},{"before":"c1247f9e7eb2a32291cb375e90d303a0ee9dcf73","after":"0cc34b3dcc869d3d6474460a51175cc371204e69","ref":"refs/heads/develop","pushedAt":"2024-04-05T15:38:01.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Spring 2024 copyright update.","shortMessageHtmlLink":"Spring 2024 copyright 
update."}},{"before":"3cfe87690d047c06ec0d29a859c930b635a42e96","after":"c1247f9e7eb2a32291cb375e90d303a0ee9dcf73","ref":"refs/heads/develop","pushedAt":"2024-03-27T11:13:10.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Ensure S3 redirects use TLS\n\nWhen following a 3xx redirection from AWS,\nredirect_endpoint_callback() wasn't putting 'https://' on the\nnew URL. The redirection worked, but dropped back to http.\nFix by prepending 'https://' to ensure it uses TLS, and\nadding a bit of error checking to ensure all parts of the\nnew url have been included.","shortMessageHtmlLink":"Ensure S3 redirects use TLS"}},{"before":"78e507dbd8a0567c7f3c8c1e265d36218e3f0e77","after":"3cfe87690d047c06ec0d29a859c930b635a42e96","ref":"refs/heads/develop","pushedAt":"2024-03-26T15:05:55.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Drop duplicate tbx_conf_t","shortMessageHtmlLink":"Drop duplicate tbx_conf_t"}},{"before":"6ea61bfe531edd387ee01ca91b049845ac0d841d","after":"78e507dbd8a0567c7f3c8c1e265d36218e3f0e77","ref":"refs/heads/develop","pushedAt":"2024-03-21T12:14:40.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Ensure hFILE_scheme_handler open and isremote are set\n\nCheck that hfile plugins have supplied open() and\nisremote() functions in their hFILE_scheme_handler struct,\nand refuse to add them if not. 
Failing to check this\ncould lead to an attempt to call a NULL pointer when\nthe interfaces are used.\n\nFix up the \"crypt4gh-needed\" scheme handler, which did not\nsupply isremote(); and \"mem\" which failed to supply open().\n\nThanks to John Marshall for suggested validation code\nin hfile_add_scheme_handler().\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 67349","shortMessageHtmlLink":"Ensure hFILE_scheme_handler open and isremote are set"}},{"before":"ca0f6214b94adf9278cbcaaefd50f5fe9455f9ad","after":"6ea61bfe531edd387ee01ca91b049845ac0d841d","ref":"refs/heads/develop","pushedAt":"2024-03-21T11:15:38.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"pd3","name":"Petr Danecek","path":"/pd3","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/524074?s=80&v=4"},"commit":{"message":"Fix duplicated but missing FORMAT bug\n\n7ce510c added code to drop duplicate FORMAT tags, but missed the\ncase where the duplicated entry was off the end of the per-sample\ndata, so it hit the part that put in MISSING values. This led\nto an attempt to call memset() with a negative size. 
Fixed by\nadding code to skip the duplicated FORMAT tag.\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 67431","shortMessageHtmlLink":"Fix duplicated but missing FORMAT bug"}},{"before":"55cafdc9434f3141019cda7274c7a930a4ddd361","after":"ca0f6214b94adf9278cbcaaefd50f5fe9455f9ad","ref":"refs/heads/develop","pushedAt":"2024-03-15T09:23:10.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"[minor] Change error to warning.\n\nIt looks like it was meant to be a warning rather than an error message.","shortMessageHtmlLink":"[minor] Change error to warning."}},{"before":"7d3efee742cd13a5b23c057ee29a71a51c6f94a6","after":"55cafdc9434f3141019cda7274c7a930a4ddd361","ref":"refs/heads/develop","pushedAt":"2024-03-14T18:08:03.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Adjust the expected output for malformed VCF","shortMessageHtmlLink":"Adjust the expected output for malformed VCF"}},{"before":"3e54663232dd3ce80eae44de2093a2f34ff901de","after":"7d3efee742cd13a5b23c057ee29a71a51c6f94a6","ref":"refs/heads/develop","pushedAt":"2024-03-07T16:48:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Fix S3 virtual path and add redirect capability (PR #1756)\n\nCurrently only s3 pathing works with a redirect;\r\nWith this fix s3 virtual gets a virtual redirect url;\r\nadditionally, performs an extra redirect if necessary.","shortMessageHtmlLink":"Fix S3 virtual path and add redirect capability (PR 
#1756)"}},{"before":"255dfcbfa2cfbb8fcb4735b7c3bee5744c30b3f7","after":"3e54663232dd3ce80eae44de2093a2f34ff901de","ref":"refs/heads/develop","pushedAt":"2024-03-07T16:12:54.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Delay closing the index file when indexing on-the-fly\n\nThis is to ensure the timestamp on the index file is later than the\none on the file being indexed, preventing spurious \"The index file\nis older than the data file\" messages when it's used. The delay\nis necessary because the main file EOF block may not have been\nwritten when hts_idx_save_as() has been called.\n\nReworks the idx_save functions to add one that keeps the index\nhandle open, storing it in the hts_idx_t struct. hts_close()\nchecks for this, and closes the index file if it finds one\nafter having closed the file it was passed. Unfortunately this\nmeans hts_close() will report any errors that happen when the\nindex file is closed. To reduce the chance of that happening,\nthe index writer calls bgzf_flush() to reduce the amount of\nwork that the final bgzf_close() on the index has to do.\n\nAn unfortunate wrinkle is that to set the timestamp on the\nindex file, we need to ensure some data is written just before\nthe file is closed. This is fine for CSI indexes as they're\nBGZF compressed and we write an EOF block. For uncompressed\nBAI indexes, we instead use an ugly hack of keeping the last\nfew bytes back until we want to close the file. This is\nhorrible, but I can't think of a better way to get the result\nwe want.\n\nFinally, it turned out that calling bgzf_flush() when the\nfile has been opened in uncompressed mode (\"u\") crashed\ndue to a NULL pointer dereference. 
It now more usefully\nflushes the underlying file.","shortMessageHtmlLink":"Delay closing the index file when indexing on-the-fly"}},{"before":"5d2c3f721d78906486f1759b8cda87649a14c684","after":"255dfcbfa2cfbb8fcb4735b7c3bee5744c30b3f7","ref":"refs/heads/develop","pushedAt":"2024-03-07T14:52:00.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"added thread pool to tabix operations","shortMessageHtmlLink":"added thread pool to tabix operations"}},{"before":"6d0dd0025811744668ad82ec5f8bec6bf151f16e","after":"5d2c3f721d78906486f1759b8cda87649a14c684","ref":"refs/heads/develop","pushedAt":"2024-02-28T19:56:41.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"yfarjoun","name":"Yossi Farjoun","path":"/yfarjoun","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2745554?s=80&v=4"},"commit":{"message":"Update bgzip.1","shortMessageHtmlLink":"Update bgzip.1"}},{"before":"3e11d0e335bfdf34db3ba5f61d52d1d19a60bdfe","after":"6d0dd0025811744668ad82ec5f8bec6bf151f16e","ref":"refs/heads/develop","pushedAt":"2024-02-27T16:45:11.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Extend pileup test to include quality values.\n\nWithout this we cannot test that the overlap removal code is working,\nwhich operates by zeroing quality values.","shortMessageHtmlLink":"Extend pileup test to include quality 
values."}},{"before":"7db7e8371d5230bf222d55852880001979ae8e93","after":"3e11d0e335bfdf34db3ba5f61d52d1d19a60bdfe","ref":"refs/heads/develop","pushedAt":"2024-02-23T12:00:55.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Bug fix the recent bam_plp_destroy memory leak removal\n\nThe pileup interface maintains a linked list of lbnode_t pointers.\nThese start at iter->head, chain through lbnode->next, and then end up\nat iter->tail. We also have a separate linked list in iter->mp of\nitems we've previously freed, so we don't have to free and malloc\ncontinually.\n\nbam_plp64_next adds and removes items to these linked lists, and it\ncalls iter->plp_destruct when it puts items into the free list. So\nfar so good, and so correct.\n\nHowever if for whatever reason we bail out of the pileup interface\nearly, before we've punted all records onto the iter->mp free list,\nthen we weren't calling plp_destruct on the current \"in flight\" data.\nThis caused a memory leak, fixed in d028e0d.\n\nUnfortunately there is a subtlety I didn't notice at the time. The\nin-flight linked list goes from iter->head to *one before*\niter->tail. The tail is simply a dummy node and unused by the code.\nI don't understand why it has to work this way, but presumably someone\ndidn't want iter->head and iter->tail to ever point to the same item.\n\nThe bam_plp_destroy function however has to move all these items to\nthe iter->mp free list, so here it goes from iter->head to iter->tail\ninclusively. 
This commit avoids attempting to call the destructor on\nthe tail, which could be a previously freed item that was pulled back\noff the iter->mp list, leading to double frees.","shortMessageHtmlLink":"Bug fix the recent bam_plp_destroy memory leak removal"}},{"before":"fdbafae580257734080fd6b91b9d73839a959b06","after":"7db7e8371d5230bf222d55852880001979ae8e93","ref":"refs/heads/develop","pushedAt":"2024-02-23T09:36:39.000Z","pushType":"pr_merge","commitsCount":4,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Call the pileup destructor in bam_plp_destroy.\n\nThis frees memory when destroying earlier than expected, such as\nduring a processing failure.\n\nI can't figure out how this has been missed all these years!","shortMessageHtmlLink":"Call the pileup destructor in bam_plp_destroy."}},{"before":"98c2667326a15a81e476c188aa15a017ae76a921","after":"fdbafae580257734080fd6b91b9d73839a959b06","ref":"refs/heads/develop","pushedAt":"2024-02-19T16:11:30.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Further speed up sam_parse_B_vals.\n\nPreviously it parsed the B string twice: once to count commas and\nallocate memory, and once to fill out the memory.\n\nNow the code reallocates periodically and thus only needs a single\npass. 
The effect on large B arrays is significant.\n\n 2-pass 1-pass develop\n gcc7 -O2: 5565824443 3299046885\n gcc7 -O3: 5779736469 3400756477\n gcc13 -O2: 5565893109 3086808341\n gcc13 -O3: 5392426978 3346007015 9724589000\n clang10 -O2: 5344657729 3465765165\n clang10 -O3: 5348030140 3460058513\n clang16 -O2: 4563321159 3374951558\n clang16 -O3: 4575986193 3311061338 6398268577\n\nSpeed instability was still observed by modifying code elsewhere so\nthis has been improved by splitting up the function and adding\nfunction alignment requests. We could achieve a similar result by\ncompiler options such as -falign-loops=32, but this affects all code\nand we have not evaluated the impact elsewhere.","shortMessageHtmlLink":"Further speed up sam_parse_B_vals."}},{"before":"f19b844b45e5e9dc6050d28fba02aba99edecb8f","after":"98c2667326a15a81e476c188aa15a017ae76a921","ref":"refs/heads/develop","pushedAt":"2024-02-19T15:20:33.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Add check for .gzi when determining whether to rebuild the fai index.\n\nFixes #1744","shortMessageHtmlLink":"Add check for .gzi when determining whether to rebuild the fai index."}},{"before":"34031e91070843a33a18002dfcb09562232f675f","after":"f19b844b45e5e9dc6050d28fba02aba99edecb8f","ref":"refs/heads/develop","pushedAt":"2024-02-19T14:50:55.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Added tests and changes to make the test work.","shortMessageHtmlLink":"Added tests and changes to make the test 
work."}},{"before":"4ff46a6f609fbf886457bbab0f3253446b46a541","after":"34031e91070843a33a18002dfcb09562232f675f","ref":"refs/heads/develop","pushedAt":"2024-02-19T14:21:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"whitwham","name":"Andrew Whitwham","path":"/whitwham","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/9553034?s=80&v=4"},"commit":{"message":"Fix indexing bug by flushing BCF bgzf stream after header write\n\nbcf_idx_init() calls bgzf_tell() to get the starting index offset.\nThis was OK when single-threaded but broke with multiple threads\nbecause bgzf_tell() lies about the file offset unless bgzf_flush()\nwas called first. SAM.gz, BAM and VCF.gz all did this, but BCF\ndidn't, leading to an incorrect first index entry when combining\nmulti-threads with indexing on the fly. Fix by adding the missing\nbgzf_flush() after writing the header.\n\nAs a side benefit, the BCF variant records will now start in\na fresh BGZF block, instead of being mixed in with part of the\nBCF header.\n\ntest/index.bcf.csi has to be replaced due to the extra flush\nadding one more block to the (uncompressed) index.bcf file that\ngets generated by the test harness.","shortMessageHtmlLink":"Fix indexing bug by flushing BCF bgzf stream after header write"}},{"before":"dd9c5616b1ab267eedbe6b39cb8f71dae95ae642","after":"4ff46a6f609fbf886457bbab0f3253446b46a541","ref":"refs/heads/develop","pushedAt":"2024-02-15T15:25:29.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Make faidx error messages slightly more enlightening","shortMessageHtmlLink":"Make faidx error messages slightly more 
enlightening"}},{"before":"a6a6350ec24c043dad6d12d213e7e62d8f2d93fe","after":"dd9c5616b1ab267eedbe6b39cb8f71dae95ae642","ref":"refs/heads/develop","pushedAt":"2024-02-09T12:11:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"bgzip modified and access time set as source file","shortMessageHtmlLink":"bgzip modified and access time set as source file"}},{"before":"7278dabf370f5bb18b02c8fbbbf15ad59ce6712c","after":"a6a6350ec24c043dad6d12d213e7e62d8f2d93fe","ref":"refs/heads/develop","pushedAt":"2024-02-08T15:53:57.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Fix arithmetic overflow in load_ref_portion() on very long refs\n\nWhile the Mistletoe reference has been chopped into segments of\nless than 2^31 bases, they were still long enough to cause an\noverflow in the load_ref_portion() `len` calculation. This was\ndue to the line endings taking the total over INT_MAX. Fix\nby changing the data type of `start` and `end` to `hts_pos_t`\nso the entire calculation is done on 64-bit values.\n\nCallers to load_ref_portion() are also updated where necessary\nto use hts_pos_t, making long references more likely to work\nshould they ever be supported in CRAM. All callers of callers\nwere already using 64-bit values due to earlier upgrades.\n\nThere's a potential complication over calculating the MD5\nchecksums as the exported hts_md5_update() function takes an\nunsigned long for the length. On 64-bit platforms with 32-bit\nunsigned long (i.e. Windows) it is necessary to add a loop if\nthe reference is a very long one. 
On platforms with 64-bit\nlong a single call is still used, and the loop should be optimised\nout.\n\nFixes #1734 (CRAM load_ref_portion() fails on some Mistletoe\nreferences)","shortMessageHtmlLink":"Fix arithmetic overflow in load_ref_portion() on very long refs"}},{"before":"65ae5744347c9403c061585fa2fc9f5262f2f977","after":"7278dabf370f5bb18b02c8fbbbf15ad59ce6712c","ref":"refs/heads/develop","pushedAt":"2024-02-02T15:09:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"jkbonfield","name":"James Bonfield","path":"/jkbonfield","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/2210525?s=80&v=4"},"commit":{"message":"Fix possible heap overflow in cram_encode_aux() on bad RG:Z tags\n\nRG:Z tags without a proper NUL termination could lead to use of\ninvalid data, or a heap overflow when the tag is passed to\nsam_hrecs_find_rg(), or hts_log_warning() if the former returns\nNULL. Fix by moving the line that skips to the end of the aux\ntag and then checking that it was terminated correctly, failing\nif it was not.\n\nSimilar checks are also added for MD:Z and generic Z- or H- type\ntags, to prevent generation of unreadable CRAM files.\n\nCredit to OSS-Fuzz\nFixes oss-fuzz 66369","shortMessageHtmlLink":"Fix possible heap overflow in cram_encode_aux() on bad RG:Z tags"}},{"before":"5627ef618eda03eadd57cd5cbd8b65f15ff7dfdc","after":"65ae5744347c9403c061585fa2fc9f5262f2f977","ref":"refs/heads/develop","pushedAt":"2024-01-22T12:02:48.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"daviesrob","name":null,"path":"/daviesrob","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/3234562?s=80&v=4"},"commit":{"message":"Merge version number bump and NEWS file from master","shortMessageHtmlLink":"Merge version number bump and NEWS file from 
master"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEP9d_8QA","startCursor":null,"endCursor":null}},"title":"Activity ยท samtools/htslib"}