Apache Linkis builds a computation middleware layer to facilitate connection, governance and orchestration between the upper applications and the underlying data engines.
Updated Jun 5, 2024 - Java
MapReduce, Spark, Java, and Scala for Data Algorithms Book
Hopsworks - Data-Intensive AI platform with a Feature Store
Java library for approximate nearest neighbors search using Hierarchical Navigable Small World graphs
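The library above implements HNSW. The core routine it builds on is a greedy search over a proximity graph: starting from an entry node, repeatedly move to whichever neighbor is closest to the query, and stop at a local minimum. Below is a minimal single-layer sketch of that idea (no hierarchy, toy 1-D points, hypothetical `points`/`graph` data), not the library's actual API:

```python
# Toy proximity graph: node id -> list of neighbor ids.
# Points are 1-D values for simplicity; real HNSW uses high-dimensional vectors.
points = {0: 1.0, 1: 4.0, 2: 9.0, 3: 10.5, 4: 20.0}
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def dist(node, query):
    return abs(points[node] - query)

def greedy_search(entry, query):
    """Walk greedily toward the query; stop at a local minimum."""
    current = entry
    while True:
        best_neighbor = min(graph[current], key=lambda n: dist(n, query))
        if dist(best_neighbor, query) < dist(current, query):
            current = best_neighbor
        else:
            return current

# greedy_search(0, 10.0) walks 0 -> 1 -> 2 -> 3 and returns 3.
```

HNSW stacks several such graphs into coarse-to-fine layers so the greedy walk starts near the answer, which is what makes the search approximate but fast.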
This project demonstrates how to use Apache Airflow to submit jobs to an Apache Spark cluster in different programming languages, using Python, Scala, and Java as examples.
SUTD 2021 50.043 Database and Big Data Systems Code Dump
A project and workaround repository that produces a stream to a Kafka cluster, then consumes and processes it.
Custom AEMO MMS Data Model CSV reader for Apache Spark
MapReduce job development, RDD programming, medical data management, sales analysis, and efficient data integration for big data analysis. Spark: big data processing, Sqoop integration, and Spark Structured Streaming for real-time data.
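The MapReduce pattern mentioned above reduces to two phases: a mapper that emits key-value pairs and a reducer that aggregates values per key. A minimal pure-Python word-count sketch of the pattern (the function names are illustrative, not from any of these repositories):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reducer: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big analysis", "data pipeline"]
result = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
# result == {"big": 2, "data": 2, "analysis": 1, "pipeline": 1}
```

In Hadoop or Spark the same two phases run distributed, with a shuffle step grouping pairs by key between them.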
Example of using Apache Spark libraries to implement machine learning algorithms.
Assignment for the UoM course "Big Data".
B2C online education website built with a frontend/backend separation development model and the MVC design pattern, including a course recommendation system.
Implementation of Hadoop and Spark