{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":522549170,"defaultBranch":"main","name":"timescaledb","ownerLogin":"sb230132","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2022-08-08T12:55:17.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/6995422?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1694504554.0","currentOid":""},"activityList":{"items":[{"before":"ba9b81854c8c94005793bccff29433f6086e5274","after":"646cecd14d0f3fe5ca59c8fe6d429226c789a91f","ref":"refs/heads/main","pushedAt":"2023-09-16T09:07:21.000Z","pushType":"push","commitsCount":7,"pusher":{"login":"sb230132","name":"Bharathy","path":"/sb230132","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6995422?s=80&v=4"},"commit":{"message":"Add API function for updating OSM chunk ranges\n\nThis commit introduces a function `hypertable_osm_range_update`\nin the _timescaledb_functions schema. This function is meant to serve\nas an API call for the OSM extension to update the time range\nof a hypertable's OSM chunk with the min and max values present\nin the contiguous time range its tiered chunks span.\nIf the range is not contiguous, then it must be set to the invalid\nrange an OSM chunk is assigned upon creation.\nA new status field is also introduced in the hypertable catalog\ntable to keep track of whether the ranges covered by tiered and\nnon-tiered chunks overlap.\nWhen there is no overlap detected then it is possible to apply the\nOrdered Append optimization in the presence of OSM chunks.","shortMessageHtmlLink":"Add API function for updating OSM chunk ranges"}},{"before":"93519d0af81f1ed121a0a30525d2008a89535455","after":"ba9b81854c8c94005793bccff29433f6086e5274","ref":"refs/heads/main","pushedAt":"2023-09-14T10:38:31.000Z","pushType":"push","commitsCount":4,"pusher":{"login":"sb230132","name":"Bharathy","path":"/sb230132","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6995422?s=80&v=4"},"commit":{"message":"Support for partial aggregations at chunk level\n\nThis patch adds support for partial aggregations at the chunk level.\nThe aggregation is replanned in the create_upper_paths_hook of\nPostgreSQL. The AggPath is split up into multiple\nAGGSPLIT_INITIAL_SERIAL operations (one on top of each chunk), which\ncreate partials, and one AGGSPLIT_FINAL_DESERIAL operation, which\nfinalizes the aggregation.","shortMessageHtmlLink":"Support for partial aggregations at chunk level"}},{"before":"9155f212939b795bd051213ca9196f1af40b578e","after":"962668e212fe48ea30b64cd937497b4650231c0c","ref":"refs/heads/recompress_optimization","pushedAt":"2023-09-12T13:26:20.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"sb230132","name":"Bharathy","path":"/sb230132","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6995422?s=80&v=4"},"commit":{"message":"Refactor recompress_chunk() API\n\nThis patch will improve performance of recompress_chunk() API\nby identifying only affected segments which needs to be\nrecompressed. 
2023-09-12 13:26  Force-pushed recompress_optimization
    Refactor recompress_chunk() API

    This patch improves the performance of the recompress_chunk() API by
    identifying only the affected segments that need to be recompressed;
    unaffected segments are never decompressed. If the number of affected
    compressed segments exceeds a certain limit, we fall back to the
    legacy way of recompressing, where we decompress all compressed
    segments and recompress them. A new GUC variable,
    ts_guc_enable_recompression_optimization, is introduced; when
    disabled, it results in a full recompression without any optimization.

    Fixes #392

2023-09-12 12:32  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message as above; this
    branch is force-pushed repeatedly below, each time with this message
    or a minor revision of it)
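A sketch of toggling the new optimization off. The C-level variable is
named ts_guc_enable_recompression_optimization; TimescaleDB exposes
ts_guc_* variables to SQL under the timescaledb. namespace, so the
setting name below is an assumed mapping, and the chunk name is
illustrative.

    -- Assumed SQL-level name for ts_guc_enable_recompression_optimization:
    SET timescaledb.enable_recompression_optimization = off;
    -- With the optimization off, this performs a full decompress/recompress:
    CALL recompress_chunk('_timescaledb_internal._hyper_1_1_chunk');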
2023-09-12 07:42  Created branch fix_slow_inserts (at the commit below)
2023-09-12 07:42  Pushed 1 commit to main
    Function approximate_row_count returns 0 for caggs (#6053)

    The approximate_row_count function is executed directly on the user
    view instead of the corresponding materialized hypertable, which
    returns 0 for caggs. The function is updated to fetch the
    materialized hypertable corresponding to the cagg and then get the
    approximate_row_count for that materialized hypertable. (See the
    sketch below.)

    Fixes #6051

2023-09-12 01:32  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)
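After the approximate_row_count fix, calling the function on a
continuous aggregate resolves to its materialized hypertable instead of
returning 0 ('daily_summary' is an illustrative cagg name):

    -- Previously returned 0 when called on the cagg's user view:
    SELECT approximate_row_count('daily_summary');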
2023-09-09 05:49  Pushed 2 commits to main
    Server crash when using duplicate segmentby column (#6044)

    The segmentby column info array is populated using the column
    attribute number as an array index, as part of validating and
    creating segmentby column info in the function compresscolinfo_init.
    Since the column is duplicated, the attribute number is the same for
    both segmentby columns. When that attribute number is used as an
    index, only one array element is populated correctly with the
    detailed column info, whereas the other element remains NULL. This
    segmentby column info is stored in the catalog while processing
    compression options (ALTER TABLE ...). When the chunk is compressed,
    the segmentby column information is retrieved from the catalog to
    create the scan key used to find an existing index on the table that
    matches the segmentby columns. One of the two keys is built
    correctly, whereas the second contains NULL values, which results in
    a crash during the index scan. The proposed change avoids this crash
    by raising an error if the user specifies duplicate columns in the
    compress_segmentby or compress_orderby options. (A sketch of the
    rejected configuration follows after the 2023-09-07 entries below.)

    Also, the postgresql-client package is added to the linux-32bit
    build dependencies to avoid failures when uploading the regression
    results.

2023-09-07 14:12  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)

2023-09-07 07:48  Pushed 1 commit to main
    Fix segfault in set_integer_now_func

    When an invalid function OID is passed to set_integer_now_func, it
    detects that the OID is invalid, but before throwing the error it
    calls ReleaseSysCache on an invalid tuple, causing a segfault. Fixed
    by removing the invalid ReleaseSysCache call.

    Fixes #6037
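With the duplicate-segmentby fix, the misconfiguration is rejected up
front, when compression options are set, rather than crashing later
during compression ('metrics' and 'device_id' are illustrative names):

    ALTER TABLE metrics SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'device_id, device_id'  -- now errors
    );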
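For reference, the normal shape of a set_integer_now_func call; 'events'
(assumed to have a bigint time column) and current_epoch are
illustrative. The crash involved an invalid function OID reaching the C
code, which now raises a clean error instead of segfaulting.

    CREATE FUNCTION current_epoch() RETURNS bigint
        LANGUAGE SQL STABLE AS $$ SELECT extract(epoch FROM now())::bigint $$;
    SELECT set_integer_now_func('events', 'current_epoch');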
2023-09-07 07:45  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)

2023-09-06 09:27  Deleted branch trigger/sanitizer
2023-09-06 04:43  Deleted branches fix-bgw-custom, merge_support, and update_bug
2023-09-06 04:39  Created branch trigger/sanitizer (at the
                  recompress_chunk() refactor commit)

2023-09-05 16:52  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)
2023-09-05 07:26  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)

2023-09-05 06:23  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (an earlier revision of the message,
    without the GUC paragraph)
2023-09-04 17:02  Force-pushed recompress_win
2023-09-04 14:57  Force-pushed recompress_win
2023-09-04 14:02  Force-pushed recompress_win
2023-09-04 12:34  Force-pushed recompress_win
    (head commit on all four pushes: "Analyze windows test failure.")

2023-09-04 12:25  Pushed 1 commit to main
    Fix server crash on UPDATE of compressed chunk

    An UPDATE query with system attributes in the WHERE clause caused
    the server to crash. This patch fixes the issue by checking for
    system attributes and handling only segmentby attributes in
    fill_predicate_context().

    Fixes #6024
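A sketch of the previously crashing statement shape: a system column
such as ctid in the WHERE clause of an UPDATE that touches a compressed
chunk ('metrics' is an illustrative compressed hypertable):

    -- fill_predicate_context() now skips system attributes and only
    -- builds pushed-down predicates for segmentby attributes:
    UPDATE metrics SET value = 0 WHERE ctid = '(0,1)';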
2023-09-04 11:44  Force-pushed recompress_win
2023-09-04 11:10  Pushed 1 commit to recompress_win
    Analyze windows test failure.

2023-09-04 10:50  Created branch recompress_win (at the commit below)
    Use Debian Bookworm for 32-bit tests

    So far, we have used Debian Buster (10) for our 32-bit tests. This
    distribution reaches EOL in about a year and ships an old LLVM
    version (7.0). LLVM 7 contains a few bugs that break the JIT
    functionality of PostgreSQL (missing mixed-sign 64-bit operands on
    32-bit architectures / failure to resolve the name __mulodi4). This
    patch changes the distribution used for 32-bit tests to Debian
    Bookworm (12 / LLVM 14). Since the PostgreSQL download server no
    longer offers 32-bit Debian packages, PostgreSQL is built from
    source.

2023-09-04 09:18  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)

2023-09-04 05:27  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)
2023-09-03 16:18  Force-pushed recompress_optimization
    Refactor recompress_chunk() API (same commit message)

(Showing the 30 most recent events; older activity is truncated.)