---
title: Introducing Big Data Clusters
titleSuffix: SQL Server Big Data Clusters
description: Learn about SQL Server Big Data Clusters that run on Kubernetes and provide scale-out options for both relational and HDFS data.
author: WilliamDAssafMSFT
ms.author: wiassaf
ms.reviewer: hudequei, randolphwest
ms.date: 07/09/2024
ms.service: sql
ms.subservice: big-data-cluster
ms.topic: overview
ms.custom: intro-overview
---

# Introducing SQL Server Big Data Clusters

[!INCLUDE SQL Server 2019]

[!INCLUDE big-data-clusters-banner-retirement]

In [!INCLUDE SQL Server 2019], [!INCLUDE big-data-cluster] allow you to deploy scalable clusters of SQL Server, Spark, and HDFS containers running on Kubernetes. These components run side by side, enabling you to read, write, and process big data from Transact-SQL or Spark, so you can easily combine and analyze your high-value relational data with high-volume big data.

## Get started

## Big data clusters architecture

The following diagram shows the components of a SQL Server big data cluster:

:::image type="content" source="media/big-data-cluster-overview/architecture-diagram-overview.png" alt-text="Diagram of Big data clusters architecture overview." lightbox="media/big-data-cluster-overview/architecture-diagram-overview.png":::

### Controller

The controller provides management and security for the cluster. It contains the control service, the configuration store, and other cluster-level services such as Kibana, Grafana, and Elasticsearch.

### Compute pool

The compute pool provides computational resources to the cluster. It contains nodes running SQL Server on Linux pods. The pods in the compute pool are divided into SQL Compute instances for specific processing tasks.

### Data pool

The data pool is used for data persistence. The data pool consists of one or more pods running SQL Server on Linux. It's used to ingest data from SQL queries or Spark jobs.
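As a hedged illustration of that ingestion path, the following T-SQL sketch creates an external table backed by the data pool and fills it from a query. `SqlDataPool` is the cluster's built-in data source for the data pool; the table and source names here are hypothetical, not part of the product.

```sql
-- A minimal sketch of data pool ingestion. SqlDataPool is built into the
-- cluster; dbo.hot_cache and staging.sensor_readings are illustrative names.
CREATE EXTERNAL TABLE dbo.hot_cache
(
    sensor_id INT,
    reading FLOAT
)
WITH
(
    DATA_SOURCE = SqlDataPool,
    DISTRIBUTION = ROUND_ROBIN
);

-- Ingest query results into the data pool, where they are persisted and
-- distributed across the data pool instances.
INSERT INTO dbo.hot_cache
SELECT sensor_id, reading
FROM staging.sensor_readings;
```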

### Storage pool

The storage pool consists of storage pool pods composed of SQL Server on Linux, Spark, and HDFS. All the storage nodes in a SQL Server big data cluster are members of an HDFS cluster.

> [!TIP]
> For an in-depth look into big data cluster architecture and installation, see Workshop: Microsoft SQL Server Big Data Clusters Architecture.

### App pool

Application deployment enables you to deploy applications on a SQL Server big data cluster by providing interfaces to create, manage, and run applications.

## Scenarios and features

[!INCLUDE big-data-cluster] provide flexibility in how you interact with your big data. You can query external data sources, store big data in HDFS managed by SQL Server, or query data from multiple external data sources through the cluster. You can then use the data for AI, machine learning, and other analysis tasks.

Use [!INCLUDE big-data-cluster] to:

- Deploy scalable clusters of SQL Server, Spark, and HDFS containers running on Kubernetes.
- Read, write, and process big data from Transact-SQL or Spark.
- Easily combine and analyze high-value relational data with high-volume big data.
- Query external data sources.
- Store big data in HDFS managed by SQL Server.
- Query data from multiple external data sources through the cluster.
- Use the data for AI, machine learning, and other analysis tasks.
- Deploy and run applications in [!INCLUDE big-data-clusters].
- Virtualize data with PolyBase. Query data from external SQL Server, Oracle, Teradata, MongoDB, and generic ODBC data sources with external tables.
- Provide high availability for the SQL Server master instance and all databases by using Always On availability group technology.

The following sections provide more information about these scenarios.

### Data virtualization

By leveraging PolyBase, [!INCLUDE big-data-cluster] can query external data sources without moving or copying the data. [!INCLUDE SQL Server 2019] introduces new connectors to data sources. For more information, see What's new in PolyBase 2019?

:::image type="content" source="media/big-data-cluster-overview/data-virtualization.png" alt-text="Diagram of Data virtualization.":::
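As a hedged sketch of what this looks like in practice, the T-SQL below virtualizes a table from a remote SQL Server instance. All object names are illustrative, and a database master key is assumed to already exist.

```sql
-- A minimal sketch, assuming a database master key already exists and a
-- remote SQL Server instance is reachable. All object names are illustrative.
CREATE DATABASE SCOPED CREDENTIAL RemoteSqlCredential
    WITH IDENTITY = 'remote_login', SECRET = '<strong-password>';

CREATE EXTERNAL DATA SOURCE RemoteSqlServer
    WITH (LOCATION = 'sqlserver://remote-host:1433',
          CREDENTIAL = RemoteSqlCredential);

-- The external table is a metadata-only object; queries against it are
-- pushed to the remote source, so no data is copied or moved.
CREATE EXTERNAL TABLE dbo.RemoteOrders
(
    order_id INT,
    order_total DECIMAL(10, 2)
)
WITH (LOCATION = 'RemoteDb.dbo.Orders', DATA_SOURCE = RemoteSqlServer);

SELECT TOP 10 * FROM dbo.RemoteOrders;
```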

### Data lake

A SQL Server big data cluster includes a scalable HDFS storage pool that can store big data, potentially ingested from multiple external sources. Once the big data is stored in HDFS in the cluster, you can analyze and query the data and combine it with your relational data.

:::image type="content" source="media/big-data-cluster-overview/data-lake.png" alt-text="Diagram of Data lake.":::
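As a sketch of that combination, the following T-SQL exposes CSV files in the storage pool's HDFS as an external table and joins them to a relational table. `SqlStoragePool` is the cluster's built-in data source for HDFS; the file path, file layout, and table names are assumptions for illustration.

```sql
-- A minimal sketch, assuming comma-separated files with a header row have
-- landed in HDFS under /clickstream. Table and column names are illustrative.
CREATE EXTERNAL FILE FORMAT csv_file
WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
);

CREATE EXTERNAL TABLE dbo.web_clickstream
(
    wcs_user_sk BIGINT,
    wcs_web_page_sk BIGINT
)
WITH
(
    DATA_SOURCE = SqlStoragePool,  -- built-in data source for the HDFS storage pool
    LOCATION = '/clickstream',
    FILE_FORMAT = csv_file
);

-- Combine HDFS data with relational data in a single query.
SELECT u.user_name, COUNT(*) AS clicks
FROM dbo.web_clickstream AS c
JOIN dbo.users AS u ON u.user_sk = c.wcs_user_sk
GROUP BY u.user_name;
```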

### Integrated AI and machine learning

[!INCLUDE big-data-cluster] enable AI and machine learning tasks on the data stored in HDFS storage pools and the data pools. You can use Spark, as well as built-in AI tools in SQL Server, with R, Python, Scala, or Java.

:::image type="content" source="media/big-data-cluster-overview/ai-ml-spark.png" alt-text="Diagram of AI and ML." lightbox="media/big-data-cluster-overview/ai-ml-spark.png":::
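As one hedged example of the built-in path, the following runs a Python snippet in-database with `sp_execute_external_script`, assuming Machine Learning Services is available on the master instance; the input table is illustrative.

```sql
-- A minimal sketch of in-database analytics, assuming Machine Learning
-- Services is available on the master instance. dbo.sensor_readings is an
-- illustrative table, not part of the product.
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
# InputDataSet arrives as a pandas DataFrame; summarize it and return the
# result to SQL Server as a rowset.
OutputDataSet = InputDataSet.describe().reset_index()
',
    @input_data_1 = N'SELECT reading FROM dbo.sensor_readings;';
```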

### Management and monitoring

Management and monitoring are provided through a combination of command-line tools, APIs, portals, and dynamic management views.
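For instance, standard SQL Server dynamic management views work against the master instance as on any SQL Server, so familiar monitoring queries carry over unchanged. A minimal sketch:

```sql
-- List active user sessions on the master instance using a standard DMV.
SELECT session_id, status, host_name, program_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```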

You can use Azure Data Studio to perform a variety of tasks on the big data cluster:

- Built-in snippets for common management tasks.
- Ability to browse HDFS, upload files, preview files, and create directories.
- Ability to create, open, and run Jupyter-compatible notebooks.
- Data virtualization wizard to simplify the creation of external data sources (enabled by the Data Virtualization Extension).

## Kubernetes concepts

A SQL Server big data cluster is a cluster of Linux containers orchestrated by Kubernetes.

Kubernetes is an open source container orchestrator, which can scale container deployments according to need. The following table defines some important Kubernetes terminology:

| Term | Description |
| --- | --- |
| Cluster | A Kubernetes cluster is a set of machines, known as nodes. One node controls the cluster and is designated the master node; the remaining nodes are worker nodes. The Kubernetes master is responsible for distributing work between the workers, and for monitoring the health of the cluster. |
| Node | A node runs containerized applications. It can be either a physical machine or a virtual machine. A Kubernetes cluster can contain a mixture of physical machine and virtual machine nodes. |
| Pod | A pod is the atomic deployment unit of Kubernetes. A pod is a logical group of one or more containers, and associated resources, needed to run an application. Each pod runs on a node; a node can run one or more pods. The Kubernetes master automatically assigns pods to nodes in the cluster. |

In [!INCLUDE big-data-cluster], Kubernetes is responsible for the state of the cluster. Kubernetes builds and configures the cluster nodes, assigns pods to nodes, and monitors the health of the cluster.

## Related content