diff --git a/faq/data-lab.mdx b/faq/data-lab.mdx
index 1651a99039..509b66bb48 100644
--- a/faq/data-lab.mdx
+++ b/faq/data-lab.mdx
@@ -5,27 +5,65 @@ meta:
 content:
   h1: Distributed Data Lab FAQ
 dates:
-  validation: 2025-02-06
+  validation: 2025-02-18
 category: managed-services
 productIcon: DistributedDataLabProductIcon
 ---
 
-## What is Apache Spark?
+## General
+
+### What workloads is Distributed Data Lab suited for?
+
+Distributed Data Lab supports a range of workloads, including:
+
+- Complex analytics.
+- Machine learning tasks.
+- High-speed operations on large datasets.
+
+It offers scalable CPU and GPU instances with flexible node limits and robust Apache Spark library support.
+
+### What is Apache Spark?
 
 Apache Spark is an open-source unified analytics engine designed for large-scale data processing. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance. Spark offers high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs.
 
-## How does Apache Spark work?
+### How does Apache Spark work?
 
 Apache Spark processes data in memory, which allows it to perform tasks up to 100 times faster than traditional disk-based processing frameworks like [Hadoop MapReduce](https://fr.wikipedia.org/wiki/MapReduce). It uses Resilient Distributed Datasets (RDDs) to store data across multiple nodes in a cluster and perform parallel operations on this data.
 
-## How am I billed for Distributed Data Lab?
+### How am I billed for Distributed Data Lab?
 
 Distributed Data Lab is billed based on two factors:
 
 - the main node configuration selected
 - the worker node configuration selected, and the number of worker nodes in the cluster
 
-## Can I upscale or downscale a Distributed Data Lab?
+## Clusters
+
+### Can I upscale or downscale a Distributed Data Lab?
+
+Yes, you can upscale a Data Lab cluster to distribute your workloads across more worker nodes for faster processing. You can also scale it down to zero to reduce costs, while retaining your configuration and context.
+
+You can still access the notebook of a Data Lab cluster with zero worker nodes, but you cannot perform any calculations. You can resume the activity of your cluster by provisioning at least one worker node.
+
+### Can I run a Distributed Data Lab using GPUs?
+
+Yes, you can run your cluster on either CPUs or GPUs. Scaleway leverages Nvidia's [RAPIDS Accelerator For Apache Spark](https://www.nvidia.com/en-gb/deep-learning-ai/software/rapids/), an open-source suite of software libraries and APIs to execute end-to-end data science and analytics pipelines entirely on GPUs. This technology allows for significant acceleration of data processing tasks compared to CPU-based processing.
+
+## Storage
+
+### What data source options are available?
+
+Data Lab natively integrates with Scaleway Object Storage for reading and writing data, making it easy to process data directly from your buckets. Your buckets are accessible from the Scaleway console or with any Amazon S3-compatible CLI tool.
+
+### Can I connect to S3 buckets from other cloud providers?
+
+Currently, connections are limited to Scaleway's Object Storage environment.
+
+## Notebook
+
+### What notebook is included with Distributed Data Labs?
+
+The service provides a JupyterLab notebook running on a dedicated CPU instance, fully integrated with the Apache Spark cluster for seamless data processing and calculations.
 
-Yes, you can upscale a Data Lab cluster to distribute your workloads across a greater number of worker nodes for faster processing. You can also scale it down to zero to reduce costs, while retaining your configuration and context.
+### Can I connect my local JupyterLab to the Data Lab?
 
-You can still access the notebook of a Data Lab cluster with zero worker nodes, but you cannot perform any calculation. You can resume the activity of your cluster by provisioning at least one worker node.
\ No newline at end of file
+Remote connections to a Data Lab cluster are currently not supported.
diff --git a/pages/data-lab/concepts.mdx b/pages/data-lab/concepts.mdx
index b68de70195..046791c080 100644
--- a/pages/data-lab/concepts.mdx
+++ b/pages/data-lab/concepts.mdx
@@ -24,6 +24,10 @@ A Distributed Data Lab is a data lab that is distributed across multiple worker
 
 A fixture is a set of data forming a request used for testing purposes.
 
+## GPU
+
+GPUs (Graphics Processing Units) allow Apache Spark to accelerate computations for tasks that involve large-scale parallel processing, such as machine learning and certain data analytics workloads, significantly reducing the time needed to process massive datasets and prepare data for AI models.
+
 ## JupyterLab
 
 JupyterLab is a web-based platform for interactive computing, letting you work with notebooks, code, and data all in one place. It builds on the classic Jupyter Notebook by offering a more flexible and integrated user interface, making it easier to handle various file formats and interactive components.
diff --git a/pages/data-lab/how-to/connect-to-data-lab.mdx b/pages/data-lab/how-to/connect-to-data-lab.mdx
index cfce883d4d..643e70a1c4 100644
--- a/pages/data-lab/how-to/connect-to-data-lab.mdx
+++ b/pages/data-lab/how-to/connect-to-data-lab.mdx
@@ -30,7 +30,7 @@ categories:
 
 4. Enter your [API secret key](/iam/concepts/#api-key) when prompted for a password, then click **Log in**. You are directed to the lab's home screen.
 
-5. In the files list on the left, double-click the `quickstart.ipynb` file to open it.
+5. In the files list on the left, double-click the `DatalabDemo.ipynb` file to open it.
 
 6. Update the first cell of the file with your API access key and secret key, as shown below:
 
@@ -41,4 +41,4 @@ categories:
 
     Your notebook environment is now ready to be used.
 
-7. Optionally, follow the instructions contained in the `quickstart.ipynb` file to process a test batch of data.
\ No newline at end of file
+7. Optionally, follow the instructions contained in the `DatalabDemo.ipynb` file to process a test batch of data.
\ No newline at end of file
diff --git a/pages/data-lab/quickstart.mdx b/pages/data-lab/quickstart.mdx
index ecf51a09ef..5220b9d452 100644
--- a/pages/data-lab/quickstart.mdx
+++ b/pages/data-lab/quickstart.mdx
@@ -22,7 +22,7 @@ It is composed of the following:
 
 - Notebook: A JupyterLab service operating on a dedicated node type.
 
-Scaleway provides dedicated node types for both the notebook and the cluster. The cluster nodes are high-end machines built for intensive computations, featuring numerous CPUs and substantial RAM.
+Scaleway provides dedicated node types for both the notebook and the cluster. The cluster nodes are high-end machines built for intensive computations, featuring powerful CPUs or GPUs and substantial RAM.
 
 The notebook, although capable of performing some local computations, primarily serves as a web interface for interacting with the Apache Spark cluster.
 
@@ -41,12 +41,11 @@ The notebook, although capable of performing some local computations, primarily
 
 3. Complete the following steps in the wizard:
     - Choose an Apache Spark version from the drop-down menu.
-    - Select a worker node configuration.
+    - Select a worker node configuration. For this procedure, we recommend selecting a CPU rather than a GPU.
     - Enter the desired number of worker nodes. Provisioning zero worker nodes lets you retain and access your cluster and notebook configurations, but will not allow you to run calculations.
-    - Optionally, choose an Object Storage bucket as your source of data and the place to store the output of your operations.
     - Enter a name for your Data Lab.
     - Optionally, add a description and/or tags for your Data Lab.
    - Verify the estimated cost.
 
@@ -65,7 +64,7 @@ The notebook, although capable of performing some local computations, primarily
 
 ## How to run the demo file
 
-Each Distributed Data Lab comes with a default `quickstart.ipynb` demo file for testing purposes. This file contains a preconfigured notebook environment that requires no modification to run.
+Each Distributed Data Lab comes with a default `DatalabDemo.ipynb` demonstration file for testing purposes. This file contains a preconfigured notebook environment that requires no modification to run.
 
 Execute the cells in order to perform pre-determined operations on a dummy data set.
 
@@ -81,7 +80,7 @@ Execute the cells in order to perform pre-determined operations on a dummy data
   "name": "My Spark",
   "conf":{
     "spark.hadoop.fs.s3a.access.key": "your-api-access-key",
-    "spark.hadoop.fs.s3a.secret.key": "your-api-access-key",
+    "spark.hadoop.fs.s3a.secret.key": "your-api-secret-key",
     "spark.hadoop.fs.s3a.endpoint": "your-bucket-endpoint"
   }
 }
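As a point of reference for the configuration cell corrected above, here is a minimal PySpark sketch of how the `spark.hadoop.fs.s3a.*` keys map to reading data from a Scaleway Object Storage bucket. It is not part of the documentation changes in this diff: the manual session construction, bucket name, object path, and column name are placeholder assumptions for illustration only.

```python
# Hypothetical sketch: a Spark session configured with the same
# spark.hadoop.fs.s3a.* keys as the DatalabDemo.ipynb configuration cell.
# Replace the placeholder credentials, endpoint, bucket, and path with real values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("My Spark")
    .config("spark.hadoop.fs.s3a.access.key", "your-api-access-key")
    .config("spark.hadoop.fs.s3a.secret.key", "your-api-secret-key")
    .config("spark.hadoop.fs.s3a.endpoint", "your-bucket-endpoint")
    .getOrCreate()
)

# Read a Parquet dataset from the bucket and run a simple aggregation.
# "my-bucket", "data/example.parquet", and "some_column" are illustrative placeholders.
df = spark.read.parquet("s3a://my-bucket/data/example.parquet")
df.groupBy("some_column").count().show()
```

In the hosted notebook, these settings are typically supplied through the configuration cell shown in the quickstart rather than by constructing the session manually; the snippet only illustrates how the access key, secret key, and endpoint feed S3A access to the bucket.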