{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":356940246,"defaultBranch":"main","name":"yggdrasil","ownerLogin":"hxtk","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2021-04-11T17:56:39.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/5395707?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1703793110.0","currentOid":""},"activityList":{"items":[{"before":"952fa75f8cf1ab6c4dcd3d81f0f9bb32ecb7d8a2","after":"e919d3fc0d6123c1785fe8e559de85d38fe35e12","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-29T17:54:15.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"75f4283f6ba5b85eb11063692fc97f46ae4330f6","after":"952fa75f8cf1ab6c4dcd3d81f0f9bb32ecb7d8a2","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-29T17:52:31.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"ae6feec83afb8707055cbba81113bae4deed1b3b","after":"75f4283f6ba5b85eb11063692fc97f46ae4330f6","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-29T17:50:28.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"63c23771d9cfd38359d3027badf552bc1e9b8952","after":"ae6feec83afb8707055cbba81113bae4deed1b3b","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-29T05:26:53.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"4deb24dcff3dcaf954140ed128b44f16f0ced00b","after":"63c23771d9cfd38359d3027badf552bc1e9b8952","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-29T05:18:39.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"8b63f8397a2b12dd32490f4ae9a1a240c3445845","after":"4deb24dcff3dcaf954140ed128b44f16f0ced00b","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-28T21:15:02.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"c5030d8ebbf44fd5d51fccf1daef0f30f13e7488","after":"8b63f8397a2b12dd32490f4ae9a1a240c3445845","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-28T19:56:16.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"73d2d30f085a7fc236137bf6059d9db2f8fcbfb4","after":"c5030d8ebbf44fd5d51fccf1daef0f30f13e7488","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-28T19:54:47.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"a821ab494a3467eb5ab2586075f838cb5fb365be","after":"73d2d30f085a7fc236137bf6059d9db2f8fcbfb4","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-28T19:54:08.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":null,"after":"a821ab494a3467eb5ab2586075f838cb5fb365be","ref":"refs/heads/feature/mapreduce","pushedAt":"2023-12-28T19:51:50.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Implement MapReduce\n\nDefine a Map worker capable of accepting records from an arbitrary\nsource (using a user-defined record stream), processing those logs\nin a way consistent with the MapReduce API [1], and wiriting those\nlogs to a common intermediate format in sorted chunks decided by a\npartition function.\n\nDefine a Reduce worker capable of reading records from the common\nintermediate format and reducing those records in a manner consistent\nwith the reduce portion of the MapReduce API. These records are\nprocessed in sorted order by merging arbitrarily many sorted chunks\nas provided by the Map worker. The results are then committed to an\narbitrary destination, one per Reduce worker, using a user-defined\noutput record stream.\n\nImplement the marshaling of the common intermediate record format\nin a way that is consistent with the API of the user-defined input\nand output record streams.\n\nImplement a local API equivalent to GFS RecordAppend for the atomic\nmanipulation of Write-Ahead Log files by several processes.\n\nTODO: Implement a MapReduce master to coordinate these jobs.","shortMessageHtmlLink":"Implement MapReduce"}},{"before":"8a1b7d1ba1102902df3728c79ee55be3f3f84305","after":"b872092fd2ca2d7b83aad39644d10272e028d7a6","ref":"refs/heads/main","pushedAt":"2023-11-18T17:52:48.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Set test timeout values\n\nBazel now provides recommendations to help explicitly specify test\ntimeouts in order to ensure that tests do not hang forever in error\ncases when the expected result is for them to finish relatively quickly.\n\nAll current tests are consistent enough in time taken that they can be\nset without risk of timing out and flaking, so timeout values were set\naccording to Bazel's recommendations.","shortMessageHtmlLink":"Set test timeout values"}},{"before":"ccd9681036e694dc87bfc931ee6414173dd2aa21","after":"8a1b7d1ba1102902df3728c79ee55be3f3f84305","ref":"refs/heads/main","pushedAt":"2023-11-18T17:42:51.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Baseline bazel_gazelle compatibility\n\nAll Go dependencies were updated, which required some changes to the\nBUILD files in the repository. Rather than update them all manually with\nthe understanding that more manual updates would be required in the\nfuture, the project's BUILD files were all changed to be compatible with\ngazelle so that they can be automatically regenerated with a command:\n\n bazel run //:gazelle\n\nBy default, all APIs are compiled with all standard protobuf compilers.\nSome projects which do not use those compilers explicitly opt-out by\ndeclaring an overriding directive in their specific BUILD file.\n\nRunning gazelle also revealed some errors that had been made previously,\nwhich were fixed as they were discovered.","shortMessageHtmlLink":"Baseline bazel_gazelle compatibility"}},{"before":"f230e6dc25aca8caebfd099c43bfa81a3a77087d","after":"ccd9681036e694dc87bfc931ee6414173dd2aa21","ref":"refs/heads/main","pushedAt":"2023-11-18T00:14:15.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Add support for MacOS arm64 build targets\n\nThe upstream version of rules_oci has a known bug that interferes with\nrunning builds on MacOS with apple silicon because the jq binary does\nnot match the platform [1]. We patch with @archen's fork, which switches\nthat dependency to `yq`, which has platform-specific support.\n\nAdditionally, the latest release of `rules_anchore` has bugs that cause\nbuilds to fail on platforms other than linux_amd64 [2]. We switch to the\nlatest patch, which contains a fix for that issue. In doing so, the\nversion of the grype vulnerability database was no longer compatible\nwith the version of grype in use, so we updated the grype database to\nthe latests version associated with the grype version targeted.\n\n1: https://github.com/bazel-contrib/rules_oci/issues/253\n2: https://github.com/hxtk/rules_anchore/issues/12","shortMessageHtmlLink":"Add support for MacOS arm64 build targets"}},{"before":"b0b8d5e14eb9cad3c48da26aadc3ab98532bd5f8","after":"f230e6dc25aca8caebfd099c43bfa81a3a77087d","ref":"refs/heads/main","pushedAt":"2023-10-17T22:05:24.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"hxtk","name":"Peter Sanders","path":"/hxtk","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/5395707?s=80&v=4"},"commit":{"message":"Replace rules_docker with rules_oci\n\nBecause rules_grype expects the tarball to end in a `.tar`\nsuffix and otherwise assumes the base image has been provided,\n`oci_tarball` targets were created for each image with names\ncorresponding to the implicit `.tar` rule previously provided by\nrules_docker. Otherwise, `rules_oci` was able to be used as a\ndrop-in replacement.","shortMessageHtmlLink":"Replace rules_docker with rules_oci"}}],"hasNextPage":false,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAD1DgMCQA","startCursor":null,"endCursor":null}},"title":"Activity ยท hxtk/yggdrasil"}