Set of Kubernetes solutions for reusing idle resources of nodes by running extra batch jobs
Go package to read and write Parquet files. Parquet is a file format for storing nested data structures in a flat columnar layout. It is used across the Hadoop ecosystem and by tools such as Presto and AWS Athena.
A lightweight distributed stream computing framework for Go
Export Hadoop YARN (resource-manager) metrics in prometheus format
YARN on Docker - Managing a Hadoop YARN cluster with Docker Swarm.
Run templatable playbooks of Hadoop/Spark/et al jobs on Amazon EMR
Apache Hadoop HDFS operator for the Kubernetes Data Stack
An easy Hadoop deployment system
Kubernetes operator for managing the lifecycle of Apache Hadoop YARN tasks.
Prometheus exporter of Hadoop JMX metrics
A parallel cloud computing framework based on the core principles of Apache Hadoop.
☁ Batch-processing word/letter count application with a custom k8s scheduler