diff --git a/changelog/seqera-enterprise/v25.1.md b/changelog/seqera-enterprise/v25.1.md
index 51b75f226..006471145 100644
--- a/changelog/seqera-enterprise/v25.1.md
+++ b/changelog/seqera-enterprise/v25.1.md
@@ -28,7 +28,10 @@ Studios is Seqera's in-platform tool for secure, on-demand, interactive data ana
- Audit log update: Pipeline edit events are now logged.
- Switch AWS Batch compute environment dependencies to AWS SDK v2.
- Switch Compute dependencies to AWS SDK v2.
-- You can now upload custom icons when adding or updating a GitHub pipeline. If no user-uploaded icon is defined, the icon defined in the `manifest.icon` Nextflow config field is used by default. Otherwise, the GitHub organization avatar is used.
+- You can upload custom icons when adding or updating a pipeline. If no user-uploaded icon is defined, Platform will retrieve and attach a pipeline icon in the following order of precedence:
+  1. A valid `icon` key-value pair defined in the `manifest` scope of the `nextflow.config` file.
+ 2. The GitHub organization avatar (if the repository is hosted on GitHub).
+ 3. If none of the above are defined, Platform auto-generates and attaches a pipeline icon.
- New dynamic page title for easy bookmarking.
- Added `totalProcesses` to workflow progress responses.
- Implement collapsible view for JSON workflow parameters tab and add **View as YAML** option.
diff --git a/platform-cloud/docs/getting-started/quickstart-demo/add-pipelines.md b/platform-cloud/docs/getting-started/quickstart-demo/add-pipelines.md
index da0699924..9e8509577 100644
--- a/platform-cloud/docs/getting-started/quickstart-demo/add-pipelines.md
+++ b/platform-cloud/docs/getting-started/quickstart-demo/add-pipelines.md
@@ -44,10 +44,16 @@ To launch pipelines directly with CLI tools, select the **Launch Pipeline** tab
From your workspace Launchpad, select **Add Pipeline** and specify the following pipeline details:
-- (*Optional*) **Image**: Select the **Edit** icon on the pipeline image to open the **Edit image** window. From here, select **Upload file** to browse for an image file, or drag and drop the image file directly. Images must be in JPG or PNG format, with a maximum file size of 200 KB.
+- Optional: **Image**: Select the **Edit** icon on the pipeline image to open the **Edit image** window. From here, select **Upload file** to browse for an image file, or drag and drop the image file directly. Images must be in JPG or PNG format, with a maximum file size of 200 KB.
+ :::note
+ You can upload custom icons when adding or updating a pipeline. If no user-uploaded icon is defined, Platform will retrieve and attach a pipeline icon in the following order of precedence:
+  1. A valid `icon` key-value pair defined in the `manifest` scope of the `nextflow.config` file.
+ 2. The GitHub organization avatar (if the repository is hosted on GitHub).
+ 3. If none of the above are defined, Platform auto-generates and attaches a pipeline icon.
+ :::
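+
+  A minimal `nextflow.config` sketch showing where the `icon` key lives in the `manifest` scope (the pipeline name, description, and icon URL below are hypothetical placeholders):
+
+  ```groovy
+  manifest {
+      name        = 'my-org/my-pipeline'
+      description = 'Example pipeline'
+      // Used by Platform when no custom icon has been uploaded
+      icon        = 'https://example.com/my-pipeline-icon.png'
+  }
+  ```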
- **Name**: A custom name of your choice. Pipeline names must be unique per workspace.
-- (*Optional*) **Description**: A summary of the pipeline or any information that may be useful to workspace participants when selecting a pipeline to launch.
-- (*Optional*) **Labels**: Categorize the pipeline according to arbitrary criteria (such research group or reference genome version) that may help workspace participants to select the appropriate pipeline for their analysis.
+- Optional: **Description**: A summary of the pipeline or any information that may be useful to workspace participants when selecting a pipeline to launch.
+- Optional: **Labels**: Categorize the pipeline according to arbitrary criteria (such as research group or reference genome version) that may help workspace participants to select the appropriate pipeline for their analysis.
- **Compute environment**: Select an existing workspace [compute environment](../../compute-envs/overview).
- **Pipeline to launch**: The URL of any public or private Git repository that contains Nextflow source code.
- **Revision number**: Platform will search all of the available tags and branches in the provided pipeline repository and render a dropdown to select the appropriate version.
diff --git a/platform-enterprise_versioned_docs/version-23.2/getting-started/community-showcase.md b/platform-enterprise_versioned_docs/version-23.2/getting-started/community-showcase.md
deleted file mode 100644
index 21af8755d..000000000
--- a/platform-enterprise_versioned_docs/version-23.2/getting-started/community-showcase.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: "Tower community showcase"
-description: "Instructions to run your first pipeline in the Tower community showcase."
-date: "21 Apr 2023"
-tags: [pipeline, tutorial]
----
-
-The Tower community showcase is an example workspace provided by Seqera. The showcase is pre-configured with credentials, compute environments, and pipelines to get you running Nextflow pipelines immediately. The pre-built community AWS Batch environments include 100 free hours of compute. Upon your first login to Tower Cloud, you are directed to the community showcase Launchpad. To run pipelines on your own infrastructure, create your own [organization](../orgs-and-teams/organizations) and [workspaces](../orgs-and-teams/workspace-management).
-
-## Launchpad
-
-The community showcase [Launchpad](../launch/launchpad) contains a list of pre-built community pipelines. A pipeline consists of a pre-configured workflow repository, compute environment, and launch parameters.
-
-## Datasets
-
-The community showcase contains a list of sample [datasets](../datasets/overview) under the **Datasets** tab. A dataset is a collection of versioned, structured data (usually in the form of a samplesheet) in CSV or TSV format. A dataset is used as the input for a pipeline run. Sample datasets are used in pipelines with the same name, e.g., the nf-core-rnaseq-test dataset is used as input when you run the nf-core-rnaseq pipeline.
-
-## Compute environments
-
-As of Tower version 23.1.3, the community showcase comes pre-loaded with two AWS Batch compute environments, which can be used to run the showcase pipelines. These environments come with 100 free CPU hours. A compute environment is the platform where workflows are executed. It is composed of access credentials, configuration settings, and storage options for the environment.
-
-## Credentials
-
-The community showcase includes all the [credentials](../credentials/overview) you need to run pipelines in showcase compute environments. Credentials in Tower are the authentication keys needed to access compute environments, private code repositories, and external services. Credentials in Tower are SHA-256 encrypted before secure storage.
-
-## Secrets
-
-The community showcase includes [pipeline secrets](../secrets/overview) that are retrieved and used during pipeline execution. In your own private or organization workspace, you can store the access keys, licenses, or passwords required for your pipeline execution to interact with third-party services.
-
-## Run pipeline with sample data
-
-1. From the [Launchpad](../launch/launchpad), select the pipeline of your choice to view the pipeline detail page. nf-core-rnaseq is a good first pipeline example.
-2. (Optional) Select the URL under **Workflow repository** to view the pipeline code repository in another tab.
-3. In Tower Cloud, select **Launch** from the pipeline detail page.
-4. On the **Launch pipeline** page, enter a unique **Workflow run name** or accept the pre-filled random name.
-5. (Optional) Enter labels to be assigned to the run in the **Labels** field.
-6. Under **Input/output options**, select the dataset named after your chosen pipeline from the drop-down menu under **input**.
-7. Under **outdir**, specify an output directory where run results will be saved. This must be an absolute path to storage on cloud infrastructure and defaults to `./results`.
-8. Under **email**, enter an email address where you wish to receive the run completion summary.
-9. Under **multiqc_title**, enter a title for the MultiQC report. This is used as both the report page header and filename.
-
-The remaining launch form fields will vary depending on the pipeline you have selected. Parameters required for the pipeline to run are pre-filled by default, and empty fields are optional.
-
-Once you have filled the necessary launch form details, select **Launch** from the bottom-right of the page. Tower directs you to the **Runs** tab, showing your new run in a **submitted** status on the top of the list. Select the run name to navigate to the run detail page and view the configuration, parameters, status of individual tasks, and run report.
-
-## Next steps
-
-To run workflows on your own infrastructure, or use workflows not included in the community showcase, create an [organization](../orgs-and-teams/organizations) and [workspaces](../orgs-and-teams/workspace-management).
diff --git a/platform-enterprise_versioned_docs/version-23.3/getting-started/community-showcase.md b/platform-enterprise_versioned_docs/version-23.3/getting-started/community-showcase.md
deleted file mode 100644
index bf5d28e05..000000000
--- a/platform-enterprise_versioned_docs/version-23.3/getting-started/community-showcase.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: "Seqera Community Showcase"
-description: "Instructions to run your first pipeline in the Community Showcase."
-date: "21 Apr 2023"
-tags: [pipeline, tutorial]
----
-
-The Community Showcase is an example workspace provided by Seqera. The showcase is pre-configured with credentials, compute environments, and pipelines so you can start running Nextflow pipelines immediately. The pre-built community AWS Batch environments include 100 free hours of compute.
-
-## Run a pipeline with sample data
-
-The Community Showcase [Launchpad](../launch/launchpad) contains a list of pre-built community pipelines. A pipeline consists of a pre-configured workflow repository, compute environment, and launch parameters.
-
-## Datasets
-
-The community showcase contains a list of sample [datasets](../data/datasets) under the **Datasets** tab. A dataset is a collection of versioned, structured data (usually in the form of a samplesheet) in CSV or TSV format. A dataset is used as the input for a pipeline run. Sample datasets are used in pipelines with the same name, e.g., the _nf-core-rnaseq-test_ dataset is used as input when you run the _nf-core-rnaseq_ pipeline.
-
-## Compute environments
-
-From version 23.1.3, the Community Showcase comes pre-loaded with two AWS Batch compute environments, which can be used to run the showcase pipelines. These environments come with 100 free CPU hours. A compute environment is the platform where workflows are executed. It's composed of access credentials, configuration settings, and storage options for the environment.
-
-## Credentials
-
-The Community Showcase includes all the [credentials](../credentials/overview) you need to run pipelines in showcase compute environments. Credentials are the authentication keys you need to access compute environments, private code repositories, and external services. Credentials are SHA-256 encrypted before secure storage.
-
-## Secrets
-
-The Community Showcase includes [pipeline secrets](../secrets/overview) that are retrieved and used during pipeline execution. In your own private or organization workspace, you can store the access keys, licenses, or passwords required for your pipeline execution to interact with third-party services.
-
-## Run pipeline with sample data
-
-1. From the [Launchpad](../launch/launchpad), select a pipeline to view the pipeline detail page. _nf-core-rnaseq_ is a good first pipeline example.
-2. (Optional) Select the URL under **Workflow repository** to view the pipeline code repository in another tab.
-3. Select **Launch** from the pipeline detail page.
-4. On the **Launch pipeline** page, enter a unique **Workflow run name** or use the pre-filled random name.
-5. (Optional) Enter labels to be assigned to the run in the **Labels** field.
-6. Under **Input/output options**, select the dataset named after your chosen pipeline from the drop-down menu under **input**.
-7. Under **outdir**, specify an output directory where run results will be saved. This must be an absolute path to storage on cloud infrastructure and defaults to `./results`.
-8. Under **email**, enter an email address where you wish to receive the run completion summary.
-9. Under **multiqc_title**, enter a title for the MultiQC report. This is used as both the report page header and filename.
-
-The remaining launch form fields will vary depending on the pipeline you have selected. Parameters required for the pipeline to run are pre-filled by default, and empty fields are optional.
-
-Once you've filled the necessary launch form details, select **Launch**. The **Runs** tab will then be displayed, showing your new run in a **submitted** status on the top of the list. Select the run name to navigate to the run detail page and view the configuration, parameters, status of individual tasks, and run report.
-
-## Next steps
-
-To run workflows on your own infrastructure, or use workflows not included in the Community Showcase, create an [organization](../orgs-and-teams/organizations) and [workspaces](../orgs-and-teams/workspace-management).
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-data.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-data.md
similarity index 96%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-data.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-data.md
index 69d882c61..fafe84ff8 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-data.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-data.md
@@ -5,9 +5,6 @@ date: "21 Jul 2024"
tags: [platform, data, data explorer, datasets]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Most bioinformatics pipelines require an input of some sort. This is typically a samplesheet where each row consists of a sample, the location of files for that sample (such as FASTQ files), and other sample details. Reliable shared access to pipeline input data is crucial to simplify data management, minimize user data-input errors, and facilitate reproducible workflows.
In Platform, samplesheets and other data can be made easily accessible in one of two ways:
@@ -58,12 +55,12 @@ In Data Explorer, you can:

- **View bucket contents**:
- Select a bucket name from the list to view the bucket contents. The file type, size, and path of objects are displayed in columns next to the object name. For example, view the outputs of your [nf-core/rnaseq](./comm-showcase#launch-the-nf-corernaseq-pipeline) run:
+ Select a bucket name from the list to view the bucket contents. The file type, size, and path of objects are displayed in columns next to the object name. For example, view the outputs of an *nf-core/rnaseq* run:

- **Preview files**:
- Select a file to open a preview window that includes a **Download** button. For example, view the resultant gene counts of the salmon quantification step of your [nf-core/rnaseq](./comm-showcase#launch-the-nf-corernaseq-pipeline) run:
+ Select a file to open a preview window that includes a **Download** button. For example, view the resulting gene counts from the Salmon quantification step of an *nf-core/rnaseq* run:

diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-pipelines.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-pipelines.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-pipelines.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-pipelines.md
index cd5e538c5..24f0c7368 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-pipelines.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/add-pipelines.md
@@ -5,9 +5,6 @@ date: "12 Jul 2024"
tags: [platform, launch, pipelines, launchpad]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
The Launchpad lists the preconfigured Nextflow pipelines that can be executed on the [compute environments](../../compute-envs/overview) in your workspace.
Platform offers two methods to import pipelines to your workspace Launchpad — directly from Seqera Pipelines or manually via **Add pipeline** in Platform.
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/automation.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/automation.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/automation.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/automation.md
index 2e894069d..adf0f954a 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/automation.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/automation.md
@@ -5,9 +5,6 @@ date: "21 Jul 2024"
tags: [platform, automation, api, cli, seqerakit]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Seqera Platform provides multiple methods of programmatic interaction to automate the execution of pipelines, chain pipelines together, and integrate Platform with third-party services.
### Platform API
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/comm-showcase.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/comm-showcase.mdx
deleted file mode 100644
index fdbfc882a..000000000
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/comm-showcase.mdx
+++ /dev/null
@@ -1,353 +0,0 @@
----
-title: "Explore Platform Cloud"
-description: "Seqera Platform Cloud demonstration walkthrough"
-date: "8 Jul 2024"
-tags: [platform, launch, pipelines, launchpad, showcase tutorial]
-toc_max_heading_level: 3
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-:::info
-This demo tutorial provides an introduction to Seqera Cloud, including instructions to:
-
-- Launch, monitor, and optimize the [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline
-- Select pipeline input data with [Data Explorer](../../data/data-explorer) and Platform [datasets](../../data/datasets)
-- Perform interactive analysis of pipeline results with [Data Studios](../../data_studios/index)
-
-The Community Showcase is a Seqera-managed demonstration workspace with all the resources needed to follow along with this tutorial. All [Seqera Cloud](https://cloud.seqera.io) users have access to this example workspace by default.
-:::
-
-The Launchpad in every workspace allows users to easily create and share Nextflow pipelines that can be executed on any supported infrastructure, including all public clouds and most HPC schedulers. A Launchpad pipeline consists of a pre-configured workflow repository, [compute environment](../../compute-envs/overview), and launch parameters.
-
-The Community Showcase contains 15 preconfigured pipelines, including [nf-core/rnaseq](https://github.com/nf-core/rnaseq), a bioinformatics pipeline used to analyze RNA sequencing data.
-
-The workspace also includes three preconfigured AWS Batch compute environments to run Community Showcase pipelines, and various datasets and public data sources (accessed via Data Explorer) to use as pipeline input.
-
-:::note
-To skip this Community Showcase demo and start running pipelines on your own infrastructure:
-1. Set up an [organization workspace](../workspace-setup).
-1. Create a workspace [compute environment](../../compute-envs/overview) for your cloud or HPC compute infrastructure.
-1. [Add pipelines](./add-pipelines) to your workspace.
-:::
-
-## Launch the nf-core/rnaseq pipeline
-
-:::note
-This guide is based on version 3.14.0 of the nf-core/rnaseq pipeline. Launch form parameters may differ in other versions.
-:::
-
-Navigate to the Launchpad in the `community/showcase` workspace and select **Launch** next to the `nf-core-rnaseq` pipeline to open the launch form.
-
- 
-
-The launch form consists of **General config**, **Run parameters**, and **Advanced options** sections to specify your run parameters before execution, and an execution summary. Use section headings or select the **Previous** and **Next** buttons at the bottom of the page to navigate between sections.
-
-
- Nextflow parameter schema
-
- The launch form lets you configure the pipeline execution. The pipeline parameters in this form are rendered from a [pipeline schema](../../pipeline-schema/overview) file in the root of the pipeline Git repository. `nextflow_schema.json` is a simple JSON-based schema describing pipeline parameters for pipeline developers to easily adapt their in-house Nextflow pipelines to be executed in Seqera Platform.
-
- :::tip
- See [Best Practices for Deploying Pipelines with the Seqera Platform](https://seqera.io/blog/best-practices-for-deploying-pipelines-with-seqera-platform/) to learn how to build the parameter schema for any Nextflow pipeline automatically with tooling maintained by the nf-core community.
- :::
-
-
-
-### General config
-
-Most Community Showcase pipeline parameters are prefilled. Specify the following fields to identify your run among other workspace runs:
-
-- **Workflow run name**: A unique identifier for the run, pre-filled with a random name. This can be customized.
-- **Labels**: Assign new or existing labels to the run. For example, a project ID or genome version.
-
-### Run parameters
-
-There are three ways to enter **Run parameters** prior to launch:
-
-- The **Input form view** displays form fields to enter text, select attributes from dropdowns, and browse input and output locations with [Data Explorer](../../data/data-explorer).
-- The **Config view** displays a raw schema that you can edit directly. Select JSON or YAML format from the **View as** dropdown.
-- **Upload params file** allows you to upload a JSON or YAML file with run parameters.
-
-#### input
-
-Most nf-core pipelines use the `input` parameter in a standardized way to specify an input samplesheet that contains paths to input files (such as FASTQ files) and any additional metadata needed to run the pipeline. Use **Browse** to select either a file path in cloud storage via **Data Explorer**, or a pre-loaded **Dataset**:
-
-- In the **Data Explorer** tab, select the `nf-tower-data` bucket, then search for and select the `rnaseq_sample_data.csv` file.
-- In the **Datasets** tab, search for and select `rnaseq_sample_data`.
-
-
-
-:::tip
-See [Add data](./add-data) to learn how to add datasets and Data Explorer cloud buckets to your own workspaces.
-:::
-
-#### output
-
-Most nf-core pipelines use the `outdir` parameter in a standardized way to specify where the final results created by the pipeline are published. `outdir` must be unique for each pipeline run. Otherwise, your results will be overwritten.
-
-For this tutorial test run, keep the default `outdir` value (`./results`).
-
-:::tip
-For the `outdir` parameter in pipeline runs in your own workspace, select **Browse** to specify a cloud storage directory using Data Explorer, or enter a cloud storage directory path to publish pipeline results to manually.
-:::
-
-#### Pipeline-specific parameters
-
-Modify other parameters to customize the pipeline execution through the parameters form. For example, under **Read trimming options**, change the `trimmer` to select `fastp` in the dropdown menu instead of `trimgalore`.
-
-
-
-Select **Launch** to start the run and be directed to the **Runs** tab with your run in a **submitted** status at the top of the list.
-
-## View run information
-
-### Run details page
-
-As the pipeline runs, run details will populate with parameters, logs, and other important execution details:
-
-
- View run details
-
- - **Command-line**: The Nextflow command invocation used to run the pipeline. This contains details about the pipeline version (`-r 3.14.0` flag) and profile, if specified (`-profile test` flag).
- - **Parameters**: The exact set of parameters used in the execution. This is helpful for reproducing the results of a previous run.
- - **Resolved Nextflow configuration**: The full Nextflow configuration settings used for the run. This includes parameters, but also settings specific to task execution (such as memory, CPUs, and output directory).
- - **Execution Log**: A summarized Nextflow log providing information about the pipeline and the status of the run.
- - **Datasets**: Link to datasets, if any were used in the run.
- - **Reports**: View pipeline outputs directly in the Platform.
-
- 
-
-
-
-### View reports
-
-Most Nextflow pipelines generate reports or output files which are useful to inspect at the end of the pipeline execution. Reports can contain quality control (QC) metrics that are important to assess the integrity of the results.
-
-
- View run reports
-
-
- 
-
- For example, for the nf-core/rnaseq pipeline, view the [MultiQC](https://docs.seqera.io/multiqc) report generated. MultiQC is a helpful reporting tool to generate aggregate statistics and summaries from bioinformatics tools.
-
- 
-
- The paths to report files point to a location in cloud storage (in the `outdir` directory specified during launch), but you can view the contents directly and download each file without navigating to the cloud or a remote filesystem.
-
- #### Specify outputs in reports
-
- To customize and instruct Platform where to find reports generated by the pipeline, a [tower.yml](https://github.com/nf-core/rnaseq/blob/master/tower.yml) file that contains the locations of the generated reports must be included in the pipeline repository.
-
- In the nf-core/rnaseq pipeline, the MULTIQC process step generates a MultiQC report file in HTML format:
-
- ```yaml
- reports:
- multiqc_report.html:
- display: "MultiQC HTML report"
- ```
-
-
-
-:::note
-See [Reports](../../reports/overview) to configure reports for pipeline runs in your own workspace.
-:::
-
-### View general information
-
-The run details page includes general information about who executed the run and when, the Git hash and tag used, and additional details about the compute environment and Nextflow version used.
-
-
- View general run information
-
- 
-
- The **General** panel displays top-level information about a pipeline run:
-
- - Unique workflow run ID
- - Workflow run name
- - Timestamp of pipeline start
- - Pipeline version and Git commit ID
- - Nextflow session ID
- - Username of the launcher
- - Work directory path
-
-
-
-### View process and task details
-
-Scroll down the page to view:
-
-- The progress of individual pipeline **Processes**
-- **Aggregated stats** for the run (total walltime, CPU hours)
-- **Workflow metrics** (CPU efficiency, memory efficiency)
-- A **Task details** table for every task in the workflow
-
-The task details table provides further information on every step in the pipeline, including task statuses and metrics:
-
-
- View task details
-
- Select a task in the task table to open the **Task details** dialog. The dialog has three tabs: **About**, **Execution log**, and **Data Explorer**.
-
- #### About
-
- The **About** tab includes:
-
- 1. **Name**: Process name and tag
- 2. **Command**: Task script, defined in the pipeline process
- 3. **Status**: Exit code, task status, and number of attempts
- 4. **Work directory**: Directory where the task was executed
- 5. **Environment**: Environment variables that were supplied to the task
- 6. **Execution time**: Metrics for task submission, start, and completion time
- 7. **Resources requested**: Metrics for the resources requested by the task
- 8. **Resources used**: Metrics for the resources used by the task
-
- 
-
- #### Execution log
-
- The **Execution log** tab provides a real-time log of the selected task's execution. Task execution and other logs (such as stdout and stderr) are available for download from here, if still available in your compute environment.
-
-
-
-### Task work directory in Data Explorer
-
-If a task fails, a good place to begin troubleshooting is the task's work directory. Nextflow hash-addresses each task of the pipeline and creates unique directories based on these hashes.
-
-
- View task log and output files
-
- Instead of navigating through a bucket on the cloud console or filesystem, use the **Data Explorer** tab in the Task window to view the work directory.
-
- Data Explorer allows you to view the log files and output files generated for each task, directly within Platform. You can view, download, and retrieve the link for these intermediate files to simplify troubleshooting.
-
- 
-
-
-
-## Interactive analysis
-
-Interactive analysis of pipeline results is often performed in platforms like Jupyter Notebook or RStudio. Setting up the infrastructure for these platforms, including accessing pipeline data and the necessary bioinformatics packages, can be complex and time-consuming.
-
-**Data Studios** streamlines the process of creating interactive analysis environments for Platform users. With built-in templates, creating a data studio is as simple as adding and sharing pipelines or datasets.
-
-### Analyze RNAseq data in Data Studios
-
-In the **Data Studios** tab, you can monitor and see the details of the data studios in the Community Showcase workspace.
-
-Data Studios is used to perform bespoke analysis on the results of upstream workflows. For example, in the Community Showcase workspace we have run the **nf-core/rnaseq** pipeline to quantify gene expression, followed by **nf-core/differentialabundance** to derive differential expression statistics. The workspace contains a data studio with these results from cloud storage mounted into the studio to perform further analysis. One of these outputs is an RShiny application, which can be deployed for interactive analysis.
-
-#### Connect to the RNAseq analysis studio
-
-Select the `rnaseq_to_differentialabundance` data studio. This studio consists of an RStudio environment that uses an existing compute environment available in the showcase workspace. The studio also contains mounted data generated from the nf-core/rnaseq and subsequent nf-core/differentialabundance pipeline runs, directly from AWS S3.
-
-
-
-Select **Connect** to view the running RStudio environment. The `rnaseq_to_differentialabundance` studio includes the necessary R packages for deploying an RShiny application to visualize the RNAseq data.
-
-Deploy the RShiny app in the data studio by selecting the green play button on the last chunk of the R script:
-
-
-
-:::note
-Data Studios allows you to specify the resources each studio will use. When [creating your own data studios](../../data_studios/index) with shared compute environment resources, you must allocate sufficient resources to the compute environment to prevent data studio or pipeline run interruptions.
-:::
-
-### Explore results
-
-The RShiny app will deploy in a separate browser window, providing a data interface. Here you can view information about your sample data, perform QC or exploratory analysis, and view the results of differential expression analyses.
-
-
-
-
- Sample clustering with PCA plots
-
- In the **QC/Exploratory** tab, select the PCA (Principal Component Analysis) plot to visualize how the samples group together based on their gene expression profiles.
-
- In this example, we used RNA sequencing data from the publicly-available ENCODE project, which includes samples from four different cell lines:
-
- - **GM12878** — a lymphoblastoid cell line
- - **K562** — a chronic myelogenous leukemia cell line
- - **MCF-7** — a breast cancer cell line
- - **H1-hESC** — human embryonic stem cells
-
- What to look for in the PCA plot:
-
- - **Replicate clustering**: Ideally, biological replicates of the same cell type should cluster closely together. For example, replicates of MCF-7 (breast cancer cell line) group together. This indicates consistent gene expression profiles among biological replicates.
- - **Cell type separation**: Different cell types should form distinct clusters. For instance, GM12878, K562, MCF-7, and H1-hESC samples should each form their own separate clusters, reflecting their unique gene expression patterns.
-
- From this PCA plot, you can gain insights into the consistency and quality of your sequencing data, identify any potential issues, and understand the major sources of variation among your samples, all directly in Platform.
-
- 
-
-
-
-
- Gene expression changes with Volcano plots
-
- In the **Differential** tab, select **Volcano plots** to compare genes with significant changes in expression between two samples. For example, filter for `Type: H1 vs MCF-7` to view the differences in expression between these two cell lines.
-
- 1. **Identify upregulated and downregulated genes**: The x-axis of the volcano plot represents the log2 fold change in gene expression between the H1 and MCF-7 samples, while the y-axis represents the statistical significance of the changes.
-
- - **Upregulated genes in MCF-7**: Genes on the left side of the plot (negative fold change) are upregulated in the MCF-7 samples compared to H1. For example, the SHH gene, which is known to be upregulated in cancer cell lines, prominently appears here.
-
- 2. **Filtering for specific genes**: If you are interested in specific genes, use the filter function. For example, filter for the SHH gene in the table below the plot. This allows you to quickly locate and examine this gene in more detail.
-
- 3. **Gene expression bar plot**: After filtering for the SHH gene, select it to navigate to a gene expression bar plot. This plot will show you the expression levels of SHH across all samples, allowing you to see in which samples it is most highly expressed.
-
- - Here, SHH is most highly expressed in MCF-7, which aligns with its known role in cancer cell proliferation.
-
- Using the volcano plot, you can effectively identify and explore the genes with the most significant changes in expression between your samples, providing a deeper understanding of the molecular differences.
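As a minimal sketch of the quantity plotted on the volcano plot's x-axis (the expression values below are invented purely for illustration, not taken from this dataset), the log2 fold change between two conditions can be computed as:

```python
import math

def log2_fold_change(expr_a: float, expr_b: float, pseudocount: float = 1.0) -> float:
    """Log2 fold change of condition A relative to condition B.

    A small pseudocount avoids division by zero for unexpressed genes.
    """
    return math.log2((expr_a + pseudocount) / (expr_b + pseudocount))

# Hypothetical normalized expression values for a single gene (illustration only)
h1, mcf7 = 3.0, 63.0
lfc = log2_fold_change(h1, mcf7)  # "H1 vs MCF-7" orientation

# A negative value places the gene on the left of the volcano plot,
# i.e. higher expression in MCF-7 than in H1.
print(lfc)  # -4.0
```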
-
- 
-
-
-
-### Collaborate in the data studio
-
-To share the results of your RNAseq analysis or allow colleagues to perform exploratory analysis, share a link to the data studio by selecting the options menu for the data studio you want to share, then select **Copy data studio URL**. With this link, other authenticated users with the **Connect** [role](../../orgs-and-teams/roles) (or greater) can access the session directly.
-
-:::note
-See [Data Studios](../../data_studios/index) to learn how to create data studios in your own workspace.
-:::
-
-## Pipeline optimization
-
-Seqera's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
-
-However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
-
-Pipeline optimization analyzes resource usage data from previous runs to optimize the resource allocation for future runs. After a successful run, optimization becomes available, indicated by the lightbulb icon next to the pipeline turning black.
-
-
- Optimize nf-core/rnaseq
-
- Navigate back to the Launchpad and select the lightbulb icon next to the nf-core/rnaseq pipeline to view the optimized profile. You have the flexibility to tailor the optimization's target settings and incorporate a retry strategy as needed.
-
- #### View optimized configuration
-
- When you select the lightbulb, you can access an optimized configuration profile in the second tab of the **Customize optimization profile** window.
-
- This profile consists of Nextflow configuration settings for each process and each resource directive (where applicable): **cpus**, **memory**, and **time**. The optimized setting for a given process and resource directive is based on the maximum use of that resource across all tasks in that process.
-
- Once optimization is selected, subsequent runs of that pipeline will inherit the optimized configuration profile, indicated by the black lightbulb icon with a checkmark.
-
- :::note
- Optimization profiles are generated from one run at a time, defaulting to the most recent run, and _not_ an aggregation of previous runs.
- :::
-
- 
-
- Verify the optimized configuration of a given run by inspecting the resource usage plots for that run and these fields in the run's task table:
-
- | Description | Key |
- | ------------ | ---------------------- |
- | CPU usage | `pcpu` |
- | Memory usage | `peakRss` |
- | Runtime | `start` and `complete` |
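As an illustrative sketch (the timestamp format and the field values below are assumptions for this example, not taken from the Platform API), a task's runtime can be derived from its `start` and `complete` timestamps:

```python
from datetime import datetime, timedelta

def task_runtime(start: str, complete: str) -> timedelta:
    """Derive task runtime from `start` and `complete` timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # assumed timestamp format, for illustration
    return datetime.strptime(complete, fmt) - datetime.strptime(start, fmt)

# Hypothetical task-table record using the keys listed above
task = {
    "pcpu": 385.0,          # CPU usage, percent of a single core
    "peakRss": 8589934592,  # peak memory usage, bytes (8 GiB)
    "start": "2024-07-08T10:00:00Z",
    "complete": "2024-07-08T10:42:30Z",
}

print(task_runtime(task["start"], task["complete"]))  # 0:42:30
```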
-
-
-
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/data-studios.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/data-studios.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/data-studios.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/data-studios.md
index 98e4cb957..d1e0037f7 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/data-studios.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/data-studios.md
@@ -2,12 +2,9 @@
title: "Data Studios"
description: "An introduction to Data Studios in Seqera Platform"
date: "8 Jul 2024"
-tags: [platform, data, data studios]
+tags: [platform, studios]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
:::info
This guide provides an introduction to Data Studios using a demo studio in the Community Showcase workspace. See [Data Studios](../../data_studios/index) to learn how to create data studios in your own workspace.
:::
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/launch-pipelines.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/launch-pipelines.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/launch-pipelines.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/launch-pipelines.md
index 4ffc3cdae..e33272298 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/launch-pipelines.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/launch-pipelines.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, launch, pipelines, launchpad, showcase tutorial]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
:::info
This tutorial provides an introduction to launching pipelines in Seqera Platform.
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/monitor-runs.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/monitor-runs.md
similarity index 95%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/monitor-runs.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/monitor-runs.md
index 55148ef18..0808a6078 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/monitor-runs.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/monitor-runs.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, monitoring]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
There are several ways to monitor pipeline runs in Seqera Platform:
### Workspace view
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/pipeline-optimization.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/pipeline-optimization.md
similarity index 97%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/pipeline-optimization.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/pipeline-optimization.md
index cd67be592..eaed535cd 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/pipeline-optimization.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/pipeline-optimization.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, runs, pipeline optimization]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Seqera Platform's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
diff --git a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/view-run-information.mdx b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/view-run-information.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/view-run-information.mdx
rename to platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/view-run-information.md
index ca4a15bba..a57d2f4e6 100644
--- a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/view-run-information.mdx
+++ b/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/view-run-information.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, runs, pipelines, monitoring, showcase tutorial]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
When you launch a pipeline, you are directed to the **Runs** tab which contains all executed workflows, with your submitted run at the top of the list.
Each new or resumed run is given a random name, which can be customized prior to launch. Each row corresponds to a specific run. As a job executes, it can transition through the following states:
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-data.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-data.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-data.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-data.md
index 69d882c61..7ca1900a1 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-data.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-data.md
@@ -5,9 +5,6 @@ date: "21 Jul 2024"
tags: [platform, data, data explorer, datasets]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Most bioinformatics pipelines require an input of some sort. This is typically a samplesheet where each row consists of a sample, the location of files for that sample (such as FASTQ files), and other sample details. Reliable shared access to pipeline input data is crucial to simplify data management, minimize user data-input errors, and facilitate reproducible workflows.
In Platform, samplesheets and other data can be made easily accessible in one of two ways:
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-pipelines.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-pipelines.md
similarity index 91%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-pipelines.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-pipelines.md
index 3a7a36148..390eef3d1 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-pipelines.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/add-pipelines.md
@@ -5,9 +5,6 @@ date: "12 Jul 2024"
tags: [platform, launch, pipelines, launchpad]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
The Launchpad lists the preconfigured Nextflow pipelines that can be executed on the [compute environments](../../compute-envs/overview) in your workspace.
Platform offers two methods to import pipelines to your workspace Launchpad — directly from Seqera Pipelines or manually via **Add pipeline** in Platform.
@@ -45,6 +42,12 @@ To launch pipelines directly with CLI tools, select the **Launch Pipeline** tab
From your workspace Launchpad, select **Add Pipeline** and specify the following pipeline details:
- (*Optional*) **Image**: Select the **Edit** icon on the pipeline image to open the **Edit image** window. From here, select **Upload file** to browse for an image file, or drag and drop the image file directly. Images must be in JPG or PNG format, with a maximum file size of 200 KB.
+ :::note
+ You can upload custom icons when adding or updating a pipeline. If no user-uploaded icon is defined, Platform will retrieve and attach a pipeline icon in the following order of precedence:
+ 1. A valid `icon` key:value pair defined in the `manifest` object of the `nextflow.config` file.
+ 2. The GitHub organization avatar (if the repository is hosted on GitHub).
+ 3. If none of the above are defined, Platform auto-generates and attaches a pipeline icon.
+ :::
- **Name**: A custom name of your choice. Pipeline names must be unique per workspace.
- (*Optional*) **Description**: A summary of the pipeline or any information that may be useful to workspace participants when selecting a pipeline to launch.
- (*Optional*) **Labels**: Categorize the pipeline according to arbitrary criteria (such as research group or reference genome version) that may help workspace participants to select the appropriate pipeline for their analysis.
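As a sketch of the `manifest.icon` option described above, the icon entry lives in the `manifest` block of `nextflow.config` (the repository name and icon URL below are illustrative):

```groovy
manifest {
    name = 'my-org/my-pipeline'  // illustrative repository name
    icon = 'https://raw.githubusercontent.com/my-org/my-pipeline/main/docs/icon.png'  // illustrative URL
}
```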
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/automation.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/automation.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/automation.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/automation.md
index 2e894069d..adf0f954a 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/automation.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/automation.md
@@ -5,9 +5,6 @@ date: "21 Jul 2024"
tags: [platform, automation, api, cli, seqerakit]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Seqera Platform provides multiple methods of programmatic interaction to automate the execution of pipelines, chain pipelines together, and integrate Platform with third-party services.
### Platform API
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/comm-showcase.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/comm-showcase.mdx
deleted file mode 100644
index fb3a86a24..000000000
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/comm-showcase.mdx
+++ /dev/null
@@ -1,352 +0,0 @@
----
-title: "Explore Platform Cloud"
-description: "Seqera Platform Cloud demonstration walkthrough"
-date: "8 Jul 2024"
-tags: [platform, launch, pipelines, launchpad, showcase tutorial]
-toc_max_heading_level: 3
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-:::info
-This demo tutorial provides an introduction to Seqera Platform, including instructions to:
-- Launch, monitor, and optimize the [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline
-- Select pipeline input data with [Data Explorer](../../data/data-explorer) and Platform [datasets](../../data/datasets)
-- Perform interactive analysis of pipeline results with [Data Studios](../../data_studios/index)
-
-The Platform Community Showcase is a Seqera-managed demonstration workspace with all the resources needed to follow along with this tutorial. All [Seqera Cloud](https://cloud.seqera.io) users have access to this example workspace by default.
-:::
-
-The Launchpad in every Platform workspace allows users to easily create and share Nextflow pipelines that can be executed on any supported infrastructure, including all public clouds and most HPC schedulers. A Launchpad pipeline consists of a pre-configured workflow repository, [compute environment](../../compute-envs/overview), and launch parameters.
-
-The Community Showcase contains 15 preconfigured pipelines, including [nf-core/rnaseq](https://github.com/nf-core/rnaseq), a bioinformatics pipeline used to analyze RNA sequencing data.
-
-The workspace also includes three preconfigured AWS Batch compute environments to run Showcase pipelines, and various Platform datasets and public data sources (accessed via Data Explorer) to use as pipeline input.
-
-:::note
-To skip this Community Showcase demo and start running pipelines on your own infrastructure:
-1. Set up an [organization workspace](../workspace-setup).
-1. Create a workspace [compute environment](../../compute-envs/overview) for your cloud or HPC compute infrastructure.
-1. [Add pipelines](./add-pipelines) to your workspace.
-:::
-
-## Launch the nf-core/rnaseq pipeline
-
-:::note
-This guide is based on version 3.14.0 of the nf-core/rnaseq pipeline. Launch form parameters may differ in other versions.
-:::
-
-Navigate to the Launchpad in the `community/showcase` workspace and select **Launch** next to the `nf-core-rnaseq` pipeline to open the launch form.
-
- 
-
-The launch form consists of **General config**, **Run parameters**, and **Advanced options** sections to specify your run parameters before execution, and an execution summary. Use section headings or select the **Previous** and **Next** buttons at the bottom of the page to navigate between sections.
-
-
- Nextflow parameter schema
-
- The launch form lets you configure the pipeline execution. The pipeline parameters in this form are rendered from a [pipeline schema](../../pipeline-schema/overview) file in the root of the pipeline Git repository. `nextflow_schema.json` is a simple JSON-based schema describing pipeline parameters for pipeline developers to easily adapt their in-house Nextflow pipelines to be executed in Platform.
-
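An abridged, illustrative `nextflow_schema.json` fragment might look like the following (the `input` and `outdir` parameter names follow the nf-core convention; everything else is a sketch):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "title": "Pipeline parameters",
  "type": "object",
  "properties": {
    "input": {
      "type": "string",
      "format": "file-path",
      "description": "Path to the input samplesheet"
    },
    "outdir": {
      "type": "string",
      "description": "Directory where pipeline results are published"
    }
  },
  "required": ["input", "outdir"]
}
```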
- :::tip
- See [Best Practices for Deploying Pipelines with the Seqera Platform](https://seqera.io/blog/best-practices-for-deploying-pipelines-with-seqera-platform/) to learn how to build the parameter schema for any Nextflow pipeline automatically with tooling maintained by the nf-core community.
- :::
-
-
-
-### General config
-
-Most Showcase pipeline parameters are prefilled. Specify the following fields to identify your run amongst other workspace runs:
-
-- **Workflow run name**: A unique identifier for the run, pre-filled with a random name. This can be customized.
-- **Labels**: Assign new or existing labels to the run. For example, a project ID or genome version.
-
-### Run parameters
-
-There are three ways to enter **Run parameters** prior to launch:
-
-- The **Input form view** displays form fields to enter text, select attributes from dropdowns, and browse input and output locations with [Data Explorer](../../data/data-explorer).
-- The **Config view** displays a raw schema that you can edit directly. Select JSON or YAML format from the **View as** dropdown.
-- **Upload params file** allows you to upload a JSON or YAML file with run parameters.
-
-#### input
-
-Most nf-core pipelines use the `input` parameter in a standardized way to specify an input samplesheet that contains paths to input files (such as FASTQ files) and any additional metadata needed to run the pipeline. Use **Browse** to select either a file path in cloud storage via **Data Explorer**, or a pre-loaded **Dataset**:
-
-- In the **Data Explorer** tab, select the `nf-tower-data` bucket, then search for and select the `rnaseq_sample_data.csv` file.
-- In the **Datasets** tab, search for and select `rnaseq_sample_data`.
-
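For orientation, an nf-core/rnaseq-style samplesheet is a small CSV; the rows below are illustrative with hypothetical file paths, not the contents of `rnaseq_sample_data.csv` (check the pipeline documentation for the exact columns required by your version):

```csv
sample,fastq_1,fastq_2,strandedness
GM12878_REP1,s3://my-bucket/fastq/GM12878_REP1_R1.fastq.gz,s3://my-bucket/fastq/GM12878_REP1_R2.fastq.gz,auto
K562_REP1,s3://my-bucket/fastq/K562_REP1_R1.fastq.gz,s3://my-bucket/fastq/K562_REP1_R2.fastq.gz,auto
```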
-
-
-:::tip
-See [Add data](./add-data) to learn how to add datasets and Data Explorer cloud buckets to your own workspaces.
-:::
-
-#### output
-
-Most nf-core pipelines use the `outdir` parameter in a standardized way to specify where the final results created by the pipeline are published. `outdir` must be unique for each pipeline run. Otherwise, your results will be overwritten.
-
-For this tutorial test run, keep the default `outdir` value (`./results`).
-
-:::tip
-For the `outdir` parameter in pipeline runs in your own workspace, select **Browse** to specify a cloud storage directory using Data Explorer, or enter a cloud storage directory path to publish pipeline results to manually.
-:::
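Equivalently, the `input` and `outdir` run parameters can be supplied via **Upload params file** as YAML; the paths below are hypothetical:

```yaml
input: "s3://nf-tower-data/rnaseq_sample_data.csv"      # samplesheet path, hypothetical
outdir: "s3://my-bucket/rnaseq/run-2024-07-08/results"  # hypothetical unique output directory
```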
-
-#### Pipeline-specific parameters
-
-Modify other parameters to customize the pipeline execution through the parameters form. For example, under **Read trimming options**, set `trimmer` to `fastp` instead of `trimgalore` in the dropdown menu.
-
-
-
-Select **Launch** to start the run and be directed to the **Runs** tab with your run in a **submitted** status at the top of the list.
-
-## View run information
-
-### Run details page
-
-As the pipeline runs, run details will populate with parameters, logs, and other important execution details:
-
-
- View run details
-
- - **Command-line**: The Nextflow command invocation used to run the pipeline. This contains details about the pipeline version (`-r 3.14.0` flag) and profile, if specified (`-profile test` flag).
- - **Parameters**: The exact set of parameters used in the execution. This is helpful for reproducing the results of a previous run.
- - **Resolved Nextflow configuration**: The full Nextflow configuration settings used for the run. This includes parameters, but also settings specific to task execution (such as memory, CPUs, and output directory).
- - **Execution Log**: A summarized Nextflow log providing information about the pipeline and the status of the run.
- - **Datasets**: Link to datasets, if any were used in the run.
- - **Reports**: View pipeline outputs directly in Platform.
-
- 
-
-
-
-### View reports
-
-Most Nextflow pipelines generate reports or output files which are useful to inspect at the end of the pipeline execution. Reports can contain quality control (QC) metrics that are important to assess the integrity of the results.
-
-
- View run reports
-
-
- 
-
- For example, for the nf-core/rnaseq pipeline, view the [MultiQC](https://docs.seqera.io/multiqc) report generated. MultiQC is a helpful reporting tool to generate aggregate statistics and summaries from bioinformatics tools.
-
- 
-
- The paths to report files point to a location in cloud storage (in the `outdir` directory specified during launch), but you can view the contents directly and download each file without navigating to the cloud or a remote filesystem.
-
- #### Specify outputs in reports
-
- To instruct Platform where to find reports generated by the pipeline, include a [tower.yml](https://github.com/nf-core/rnaseq/blob/master/tower.yml) file in the pipeline repository that lists the locations of the generated reports.
-
- In the nf-core/rnaseq pipeline, the MULTIQC process step generates a MultiQC report file in HTML format:
-
- ```yaml
- reports:
- multiqc_report.html:
- display: "MultiQC HTML report"
- ```
-
-
-
-:::note
-See [Reports](../../reports/overview) to configure reports for pipeline runs in your own workspace.
-:::
-
-### View general information
-
-The run details page includes general information about who executed the run and when, the Git hash and tag used, and additional details about the compute environment and Nextflow version used.
-
-
- View general run information
-
- 
-
- The **General** panel displays top-level information about a pipeline run:
-
- - Unique workflow run ID
- - Workflow run name
- - Timestamp of pipeline start
- - Pipeline version and Git commit ID
- - Nextflow session ID
- - Username of the launcher
- - Work directory path
-
-
-
-### View process and task details
-
-Scroll down the page to view:
-
-- The progress of individual pipeline **Processes**
-- **Aggregated stats** for the run (total walltime, CPU hours)
-- **Workflow metrics** (CPU efficiency, memory efficiency)
-- A **Task details** table for every task in the workflow
-
-The task details table provides further information on every step in the pipeline, including task statuses and metrics:
-
-
- View task details
-
- Select a task in the task table to open the **Task details** dialog. The dialog has three tabs: **About**, **Execution log**, and **Data Explorer**.
-
- #### About
-
- The **About** tab includes:
-
- 1. **Name**: Process name and tag
- 2. **Command**: Task script, defined in the pipeline process
- 3. **Status**: Exit code, task status, and number of attempts
- 4. **Work directory**: Directory where the task was executed
- 5. **Environment**: Environment variables that were supplied to the task
- 6. **Execution time**: Metrics for task submission, start, and completion time
- 7. **Resources requested**: Metrics for the resources requested by the task
- 8. **Resources used**: Metrics for the resources used by the task
-
- 
-
- #### Execution log
-
- The **Execution log** tab provides a real-time log of the selected task's execution. Task execution and other logs (such as stdout and stderr) are available for download from here, if still available in your compute environment.
-
-
-
-### Task work directory in Data Explorer
-
-If a task fails, a good place to begin troubleshooting is the task's work directory. Nextflow hash-addresses each task of the pipeline and creates unique directories based on these hashes.
-
-
- View task log and output files
-
- Instead of navigating through a bucket on the cloud console or filesystem, use the **Data Explorer** tab in the Task window to view the work directory.
-
- Data Explorer allows you to view the log files and output files generated for each task, directly within Platform. You can view, download, and retrieve the link for these intermediate files to simplify troubleshooting.
-
- 
-
-
-
-## Interactive analysis
-
-Interactive analysis of pipeline results is often performed in platforms like Jupyter Notebook or RStudio. Setting up the infrastructure for these platforms, including accessing pipeline data and the necessary bioinformatics packages, can be complex and time-consuming.
-
-**Data Studios** streamlines the process of creating interactive analysis environments for Platform users. With built-in templates, creating a data studio is as simple as adding and sharing pipelines or datasets.
-
-### Analyze RNAseq data in Data Studios
-
-In the **Data Studios** tab, you can monitor and see the details of the data studios in the Community Showcase workspace.
-
-Data Studios is used to perform bespoke analysis on the results of upstream workflows. For example, in the Community Showcase workspace we ran the **nf-core/rnaseq** pipeline to quantify gene expression, followed by **nf-core/differentialabundance** to derive differential expression statistics. The workspace contains a data studio with these results mounted from cloud storage for further analysis. One of these outputs is an RShiny application, which can be deployed for interactive analysis.
-
-#### Connect to the RNAseq analysis studio
-
-Select the `rnaseq_to_differentialabundance` data studio. This studio consists of an RStudio environment that uses an existing compute environment available in the showcase workspace. The studio also mounts data generated by the nf-core/rnaseq and subsequent nf-core/differentialabundance pipeline runs directly from AWS S3.
-
-
-
-Select **Connect** to view the running RStudio environment. The `rnaseq_to_differentialabundance` studio includes the necessary R packages for deploying an RShiny application to visualize the RNAseq data.
-
-Deploy the RShiny app in the data studio by selecting the green play button on the last chunk of the R script:
-
-
-
-:::note
-Data Studios allows you to specify the resources each studio will use. When [creating your own data studios](../../data_studios/index) with shared compute environment resources, you must allocate sufficient resources to the compute environment to prevent data studio or pipeline run interruptions.
-:::
-
-### Explore results
-
-The RShiny app will deploy in a separate browser window, providing a data interface. Here you can view information about your sample data, perform QC or exploratory analysis, and view the results of differential expression analyses.
-
-
-
-
- Sample clustering with PCA plots
-
- In the **QC/Exploratory** tab, select the PCA (Principal Component Analysis) plot to visualize how the samples group together based on their gene expression profiles.
-
- In this example, we used RNA sequencing data from the publicly-available ENCODE project, which includes samples from four different cell lines:
-
- - **GM12878** — a lymphoblastoid cell line
- - **K562** — a chronic myelogenous leukemia cell line
- - **MCF-7** — a breast cancer cell line
- - **H1-hESC** — human embryonic stem cells
-
- What to look for in the PCA plot:
-
- - **Replicate clustering**: Ideally, biological replicates of the same cell type should cluster closely together. For example, replicates of MCF-7 (breast cancer cell line) group together. This indicates consistent gene expression profiles among biological replicates.
- - **Cell type separation**: Different cell types should form distinct clusters. For instance, GM12878, K562, MCF-7, and H1-hESC samples should each form their own separate clusters, reflecting their unique gene expression patterns.
-
- From this PCA plot, you can gain insights into the consistency and quality of your sequencing data, identify any potential issues, and understand the major sources of variation among your samples, all directly in Platform.
-
- 
-
-
-
-
- Gene expression changes with Volcano plots
-
- In the **Differential** tab, select **Volcano plots** to compare genes with significant changes in expression between two samples. For example, filter for `Type: H1 vs MCF-7` to view the differences in expression between these two cell lines.
-
- 1. **Identify upregulated and downregulated genes**: The x-axis of the volcano plot represents the log2 fold change in gene expression between the H1 and MCF-7 samples, while the y-axis represents the statistical significance of the changes.
-
- - **Upregulated genes in MCF-7**: Genes on the left side of the plot (negative fold change) are upregulated in the MCF-7 samples compared to H1. For example, the SHH gene, which is known to be upregulated in cancer cell lines, prominently appears here.
-
- 2. **Filtering for specific genes**: If you are interested in specific genes, use the filter function. For example, filter for the SHH gene in the table below the plot. This allows you to quickly locate and examine this gene in more detail.
-
- 3. **Gene expression bar plot**: After filtering for the SHH gene, select it to navigate to a gene expression bar plot. This plot will show you the expression levels of SHH across all samples, allowing you to see in which samples it is most highly expressed.
-
- - Here, SHH is most highly expressed in MCF-7, which aligns with its known role in cancer cell proliferation.
-
- Using the volcano plot, you can effectively identify and explore the genes with the most significant changes in expression between your samples, providing a deeper understanding of the molecular differences.
-
- 
-
-
-
-### Collaborate in the data studio
-
-To share the results of your RNAseq analysis or allow colleagues to perform exploratory analysis, share a link to the data studio by selecting the options menu for the data studio you want to share, then select **Copy data studio URL**. With this link, other authenticated users with the **Connect** [role](../../orgs-and-teams/roles) (or greater) can access the session directly.
-
-:::note
-See [Data Studios](../../data_studios/index) to learn how to create data studios in your own workspace.
-:::
-
-## Pipeline optimization
-
-Seqera Platform's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
-
-However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
-
-Pipeline optimization analyzes resource usage data from previous runs to optimize the resource allocation for future runs. After a successful run, optimization becomes available, indicated by the lightbulb icon next to the pipeline turning black.
-
-
- Optimize nf-core/rnaseq
-
- Navigate back to the Launchpad and select the lightbulb icon next to the nf-core/rnaseq pipeline to view the optimized profile. You can tailor the optimization's target settings and add a retry strategy as needed.
-
- #### View optimized configuration
-
- When you select the lightbulb, you can access an optimized configuration profile in the second tab of the **Customize optimization profile** window.
-
- This profile consists of Nextflow configuration settings for each process and each resource directive (where applicable): **cpus**, **memory**, and **time**. The optimized setting for a given process and resource directive is based on the maximum use of that resource across all tasks in that process.
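-
- As an illustration, an optimized profile resembles standard Nextflow process configuration. The process names and values below are hypothetical, not output from an actual optimization:
-
- ```groovy
- // Hypothetical optimized resource settings (illustrative values only)
- process {
-     withName: 'FASTQC' {
-         cpus   = 2
-         memory = 3.GB
-         time   = 30.m
-     }
-     withName: 'STAR_ALIGN' {
-         cpus   = 8
-         memory = 24.GB
-         time   = 4.h
-     }
- }
- ```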
-
- Once optimization is selected, subsequent runs of that pipeline will inherit the optimized configuration profile, indicated by the black lightbulb icon with a checkmark.
-
- :::note
- Optimization profiles are generated from one run at a time, defaulting to the most recent run, and _not_ an aggregation of previous runs.
- :::
-
- 
-
- Verify the optimized configuration of a given run by inspecting the resource usage plots for that run and these fields in the run's task table:
-
- | Description | Key |
- | ------------ | ---------------------- |
- | CPU usage | `pcpu` |
- | Memory usage | `peakRss` |
- | Runtime | `start` and `complete` |
-
-
-
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/data-studios.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/data-studios.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/data-studios.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/data-studios.md
index 98e4cb957..c66be9e23 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/data-studios.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/data-studios.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, data, data studios]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
:::info
This guide provides an introduction to Data Studios using a demo studio in the Community Showcase workspace. See [Data Studios](../../data_studios/index) to learn how to create data studios in your own workspace.
:::
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/launch-pipelines.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/launch-pipelines.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/launch-pipelines.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/launch-pipelines.md
index 4ffc3cdae..e33272298 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/launch-pipelines.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/launch-pipelines.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, launch, pipelines, launchpad, showcase tutorial]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
:::info
This tutorial provides an introduction to launching pipelines in Seqera Platform.
diff --git a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/monitor-runs.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/monitor-runs.md
similarity index 95%
rename from platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/monitor-runs.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/monitor-runs.md
index 55148ef18..0808a6078 100644
--- a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/monitor-runs.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/monitor-runs.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, monitoring]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
There are several ways to monitor pipeline runs in Seqera Platform:
### Workspace view
diff --git a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/pipeline-optimization.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/pipeline-optimization.md
similarity index 97%
rename from platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/pipeline-optimization.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/pipeline-optimization.md
index cd67be592..eaed535cd 100644
--- a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/pipeline-optimization.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/pipeline-optimization.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, runs, pipeline optimization]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Seqera Platform's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/view-run-information.mdx b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/view-run-information.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/view-run-information.mdx
rename to platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/view-run-information.md
index ca4a15bba..a57d2f4e6 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/view-run-information.mdx
+++ b/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/view-run-information.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, runs, pipelines, monitoring, showcase tutorial]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
When you launch a pipeline, you are directed to the **Runs** tab which contains all executed workflows, with your submitted run at the top of the list.
Each new or resumed run is given a random name, which can be customized prior to launch. Each row corresponds to a specific run. As a job executes, it can transition through the following states:
diff --git a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/add-pipelines.md b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/add-pipelines.md
index da0699924..a9914436c 100644
--- a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/add-pipelines.md
+++ b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/add-pipelines.md
@@ -45,9 +45,15 @@ To launch pipelines directly with CLI tools, select the **Launch Pipeline** tab
From your workspace Launchpad, select **Add Pipeline** and specify the following pipeline details:
- (*Optional*) **Image**: Select the **Edit** icon on the pipeline image to open the **Edit image** window. From here, select **Upload file** to browse for an image file, or drag and drop the image file directly. Images must be in JPG or PNG format, with a maximum file size of 200 KB.
+ :::note
+ You can upload custom icons when adding or updating a pipeline. If no user-uploaded icon is defined, Platform will retrieve and attach a pipeline icon in the following order of precedence:
+ 1. A valid `icon` key:value pair defined in the `manifest` object of the `nextflow.config` file.
+ 2. The GitHub organization avatar (if the repository is hosted on GitHub).
+ 3. If none of the above are defined, Platform auto-generates and attaches a pipeline icon.
+ :::
- **Name**: A custom name of your choice. Pipeline names must be unique per workspace.
-- (*Optional*) **Description**: A summary of the pipeline or any information that may be useful to workspace participants when selecting a pipeline to launch.
-- (*Optional*) **Labels**: Categorize the pipeline according to arbitrary criteria (such research group or reference genome version) that may help workspace participants to select the appropriate pipeline for their analysis.
+- Optional: **Description**: A summary of the pipeline or any information that may be useful to workspace participants when selecting a pipeline to launch.
+- Optional: **Labels**: Categorize the pipeline according to arbitrary criteria (such as research group or reference genome version) that may help workspace participants to select the appropriate pipeline for their analysis.
- **Compute environment**: Select an existing workspace [compute environment](../../compute-envs/overview).
- **Pipeline to launch**: The URL of any public or private Git repository that contains Nextflow source code.
- **Revision number**: Platform will search all of the available tags and branches in the provided pipeline repository and render a dropdown to select the appropriate version.
diff --git a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/comm-showcase.md b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/comm-showcase.md
deleted file mode 100644
index 6be0c679f..000000000
--- a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/comm-showcase.md
+++ /dev/null
@@ -1,352 +0,0 @@
----
-title: "Explore Platform Cloud"
-description: "Seqera Platform Cloud demonstration walkthrough"
-date: "8 Jul 2024"
-tags: [platform, launch, pipelines, launchpad, showcase tutorial]
-toc_max_heading_level: 3
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-:::info
-This demo tutorial provides an introduction to Seqera Platform, including instructions to:
-- Launch, monitor, and optimize the [nf-core/rnaseq](https://github.com/nf-core/rnaseq) pipeline
-- Select pipeline input data with [Data Explorer](../../data/data-explorer) and Platform [datasets](../../data/datasets)
-- Perform interactive analysis of pipeline results with [Studios](../../studios/index)
-
-The Platform Community Showcase is a Seqera-managed demonstration workspace with all the resources needed to follow along with this tutorial. All [Seqera Cloud](https://cloud.seqera.io) users have access to this example workspace by default.
-:::
-
-The Launchpad in every Platform workspace allows users to easily create and share Nextflow pipelines that can be executed on any supported infrastructure, including all public clouds and most HPC schedulers. A Launchpad pipeline consists of a pre-configured workflow repository, [compute environment](../../compute-envs/overview), and launch parameters.
-
-The Community Showcase contains 15 preconfigured pipelines, including [nf-core/rnaseq](https://github.com/nf-core/rnaseq), a bioinformatics pipeline used to analyze RNA sequencing data.
-
-The workspace also includes three preconfigured AWS Batch compute environments to run Showcase pipelines, and various Platform datasets and public data sources (accessed via Data Explorer) to use as pipeline input.
-
-:::note
-To skip this Community Showcase demo and start running pipelines on your own infrastructure:
-1. Set up an [organization workspace](../workspace-setup).
-1. Create a workspace [compute environment](../../compute-envs/overview) for your cloud or HPC compute infrastructure.
-1. [Add pipelines](./add-pipelines) to your workspace.
-:::
-
-## Launch the nf-core/rnaseq pipeline
-
-:::note
-This guide is based on version 3.14.0 of the nf-core/rnaseq pipeline. Launch form parameters may differ in other versions.
-:::
-
-Navigate to the Launchpad in the `community/showcase` workspace and select **Launch** next to the `nf-core-rnaseq` pipeline to open the launch form.
-
- 
-
-The launch form consists of **General config**, **Run parameters**, and **Advanced options** sections to specify your run parameters before execution, and an execution summary. Use section headings or select the **Previous** and **Next** buttons at the bottom of the page to navigate between sections.
-
-
- Nextflow parameter schema
-
- The launch form lets you configure the pipeline execution. The pipeline parameters in this form are rendered from a [pipeline schema](../../pipeline-schema/overview) file in the root of the pipeline Git repository. `nextflow_schema.json` is a simple JSON-based schema that describes pipeline parameters, making it easy for developers to adapt their in-house Nextflow pipelines for execution in Platform.
-
- :::tip
- See [Best Practices for Deploying Pipelines with the Seqera Platform](https://seqera.io/blog/best-practices-for-deploying-pipelines-with-seqera-platform/) to learn how to build the parameter schema for any Nextflow pipeline automatically with tooling maintained by the nf-core community.
- :::
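-
- A heavily trimmed sketch of such a schema is shown below; real nf-core schemas group parameters into definitions and carry much more metadata, and the fields here are illustrative:
-
- ```json
- {
-     "$schema": "http://json-schema.org/draft-07/schema",
-     "title": "Example pipeline parameters",
-     "type": "object",
-     "properties": {
-         "input": {
-             "type": "string",
-             "format": "file-path",
-             "description": "Path to the input samplesheet"
-         },
-         "outdir": {
-             "type": "string",
-             "description": "Directory where results are published"
-         }
-     },
-     "required": ["input", "outdir"]
- }
- ```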
-
-
-
-### General config
-
-Most Showcase pipeline parameters are prefilled. Specify the following fields to identify your run amongst other workspace runs:
-
-- **Workflow run name**: A unique identifier for the run, pre-filled with a random name. This can be customized.
-- **Labels**: Assign new or existing labels to the run. For example, a project ID or genome version.
-
-### Run parameters
-
-There are three ways to enter **Run parameters** prior to launch:
-
-- The **Input form view** displays form fields to enter text, select attributes from dropdowns, and browse input and output locations with [Data Explorer](../../data/data-explorer).
-- The **Config view** displays a raw schema that you can edit directly. Select JSON or YAML format from the **View as** dropdown.
-- **Upload params file** allows you to upload a JSON or YAML file with run parameters.
-
-#### input
-
-Most nf-core pipelines use the `input` parameter in a standardized way to specify an input samplesheet that contains paths to input files (such as FASTQ files) and any additional metadata needed to run the pipeline. Use **Browse** to select either a file path in cloud storage via **Data Explorer**, or a pre-loaded **Dataset**:
-
-- In the **Data Explorer** tab, select the `nf-tower-data` bucket, then search for and select the `rnaseq_sample_data.csv` file.
-- In the **Datasets** tab, search for and select `rnaseq_sample_data`.
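-
-For reference, an nf-core/rnaseq samplesheet is a CSV with one row per sample. A minimal sketch (sample names and file paths below are illustrative):
-
-```csv
-sample,fastq_1,fastq_2,strandedness
-CONTROL_REP1,s3://my-bucket/control_rep1_R1.fastq.gz,s3://my-bucket/control_rep1_R2.fastq.gz,auto
-TREATMENT_REP1,s3://my-bucket/treatment_rep1_R1.fastq.gz,,auto
-```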
-
-
-
-:::tip
-See [Add data](./add-data) to learn how to add datasets and Data Explorer cloud buckets to your own workspaces.
-:::
-
-#### output
-
-Most nf-core pipelines use the `outdir` parameter in a standardized way to specify where the final results created by the pipeline are published. `outdir` must be unique for each pipeline run. Otherwise, your results will be overwritten.
-
-For this tutorial test run, keep the default `outdir` value (`./results`).
-
-:::tip
-For the `outdir` parameter in pipeline runs in your own workspace, select **Browse** to specify a cloud storage directory using Data Explorer, or manually enter the path of a cloud storage directory where pipeline results will be published.
-:::
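-
-If you use **Upload params file** instead, the same settings can be captured in a short YAML file. A sketch, with values drawn from this tutorial (the exact bucket path is illustrative):
-
-```yaml
-# Hypothetical params file for this tutorial run
-input: s3://nf-tower-data/rnaseq_sample_data.csv
-outdir: ./results
-trimmer: fastp
-```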
-
-#### Pipeline-specific parameters
-
-Modify other parameters to customize the pipeline execution through the parameters form. For example, under **Read trimming options**, change the `trimmer` to select `fastp` in the dropdown menu instead of `trimgalore`.
-
-
-
-Select **Launch** to start the run and be directed to the **Runs** tab with your run in a **submitted** status at the top of the list.
-
-## View run information
-
-### Run details page
-
-As the pipeline runs, run details will populate with parameters, logs, and other important execution details:
-
-
- View run details
-
- - **Command-line**: The Nextflow command invocation used to run the pipeline. This contains details about the pipeline version (`-r 3.14.0` flag) and profile, if specified (`-profile test` flag).
- - **Parameters**: The exact set of parameters used in the execution. This is helpful for reproducing the results of a previous run.
- - **Resolved Nextflow configuration**: The full Nextflow configuration settings used for the run. This includes parameters, but also settings specific to task execution (such as memory, CPUs, and output directory).
- - **Execution Log**: A summarized Nextflow log providing information about the pipeline and the status of the run.
- - **Datasets**: Link to datasets, if any were used in the run.
- - **Reports**: View pipeline outputs directly in Platform.
-
- 
-
-
-
-### View reports
-
-Most Nextflow pipelines generate reports or output files which are useful to inspect at the end of the pipeline execution. Reports can contain quality control (QC) metrics that are important to assess the integrity of the results.
-
-
- View run reports
-
-
- 
-
- For example, for the nf-core/rnaseq pipeline, view the [MultiQC](https://docs.seqera.io/multiqc) report generated. MultiQC is a helpful reporting tool to generate aggregate statistics and summaries from bioinformatics tools.
-
- 
-
- The paths to report files point to a location in cloud storage (in the `outdir` directory specified during launch), but you can view the contents directly and download each file without navigating to the cloud or a remote filesystem.
-
- #### Specify outputs in reports
-
- To instruct Platform where to find reports generated by the pipeline, the pipeline repository must include a [tower.yml](https://github.com/nf-core/rnaseq/blob/master/tower.yml) file that specifies the locations of the generated reports.
-
- In the nf-core/rnaseq pipeline, the MULTIQC process step generates a MultiQC report file in HTML format:
-
- ```yaml
- reports:
- multiqc_report.html:
- display: "MultiQC HTML report"
- ```
-
-
-
-:::note
-See [Reports](../../reports/overview) to configure reports for pipeline runs in your own workspace.
-:::
-
-### View general information
-
-The run details page includes general information about who executed the run and when, the Git hash and tag used, and additional details about the compute environment and Nextflow version used.
-
-
- View general run information
-
- 
-
- The **General** panel displays top-level information about a pipeline run:
-
- - Unique workflow run ID
- - Workflow run name
- - Timestamp of pipeline start
- - Pipeline version and Git commit ID
- - Nextflow session ID
- - Username of the launcher
- - Work directory path
-
-
-
-### View process and task details
-
-Scroll down the page to view:
-
-- The progress of individual pipeline **Processes**
-- **Aggregated stats** for the run (total walltime, CPU hours)
-- **Workflow metrics** (CPU efficiency, memory efficiency)
-- A **Task details** table for every task in the workflow
-
-The task details table provides further information on every step in the pipeline, including task statuses and metrics:
-
-
- View task details
-
- Select a task in the task table to open the **Task details** dialog. The dialog has three tabs: **About**, **Execution log**, and **Data Explorer**.
-
- #### About
-
- The **About** tab includes:
-
- 1. **Name**: Process name and tag
- 2. **Command**: Task script, defined in the pipeline process
- 3. **Status**: Exit code, task status, and number of attempts
- 4. **Work directory**: Directory where the task was executed
- 5. **Environment**: Environment variables that were supplied to the task
- 6. **Execution time**: Metrics for task submission, start, and completion time
- 7. **Resources requested**: Metrics for the resources requested by the task
- 8. **Resources used**: Metrics for the resources used by the task
-
- 
-
- #### Execution log
-
- The **Execution log** tab provides a real-time log of the selected task's execution. Task execution and other logs (such as stdout and stderr) are available for download from here, if still available in your compute environment.
-
-
-
-### Task work directory in Data Explorer
-
-If a task fails, a good place to begin troubleshooting is the task's work directory. Nextflow hash-addresses each task of the pipeline and creates unique directories based on these hashes.
-
-
- View task log and output files
-
- Instead of navigating through a bucket on the cloud console or filesystem, use the **Data Explorer** tab in the Task window to view the work directory.
-
- Data Explorer allows you to view the log files and output files generated for each task, directly within Platform. You can view, download, and retrieve the link for these intermediate files to simplify troubleshooting.
-
- 
-
-
-
-## Interactive analysis
-
-Interactive analysis of pipeline results is often performed in platforms like Jupyter Notebook or RStudio. Setting up the infrastructure for these platforms, including accessing pipeline data and the necessary bioinformatics packages, can be complex and time-consuming.
-
-**Studios** streamlines the process of creating interactive analysis environments for Platform users. With built-in templates, creating a data studio is as simple as adding and sharing pipelines or datasets.
-
-### Analyze RNAseq data in Studios
-
-In the **Studios** tab, you can monitor and see the details of the Studios in the Community Showcase workspace.
-
-Studios is used to perform bespoke analysis on the results of upstream workflows. For example, in the Community Showcase workspace we have run the **nf-core/rnaseq** pipeline to quantify gene expression, followed by **nf-core/differentialabundance** to derive differential expression statistics. The workspace contains a Studio with these results mounted from cloud storage, ready for further analysis. One of these outputs is an RShiny application, which can be deployed for interactive analysis.
-
-#### Connect to the RNAseq analysis Studio
-
-Select the `rnaseq_to_differentialabundance` Studio. This Studio consists of an RStudio environment that uses an existing compute environment available in the showcase workspace. The Studio also contains mounted data generated from the nf-core/rnaseq and subsequent nf-core/differentialabundance pipeline runs, directly from AWS S3.
-
-
-
-Select **Connect** to view the running RStudio environment. The `rnaseq_to_differentialabundance` Studio includes the necessary R packages for deploying an RShiny application to visualize the RNAseq data.
-
-Deploy the RShiny app in the Studio by selecting the green play button on the last chunk of the R script:
-
-
-
-:::note
-Studios allows you to specify the resources each Studio will use. When [creating your own Studios](../../studios/index) with shared compute environment resources, you must allocate sufficient resources to the compute environment to prevent Studio or pipeline run interruptions.
-:::
-
-### Explore results
-
-The RShiny app will deploy in a separate browser window, providing a data interface. Here you can view information about your sample data, perform QC or exploratory analysis, and view the results of differential expression analyses.
-
-
-
-
- Sample clustering with PCA plots
-
- In the **QC/Exploratory** tab, select the PCA (Principal Component Analysis) plot to visualize how the samples group together based on their gene expression profiles.
-
- In this example, we used RNA sequencing data from the publicly-available ENCODE project, which includes samples from four different cell lines:
-
- - **GM12878** — a lymphoblastoid cell line
- - **K562** — a chronic myelogenous leukemia cell line
- - **MCF-7** — a breast cancer cell line
- - **H1-hESC** — human embryonic stem cells
-
- What to look for in the PCA plot:
-
- - **Replicate clustering**: Ideally, biological replicates of the same cell type should cluster closely together. For example, replicates of MCF-7 (breast cancer cell line) group together. This indicates consistent gene expression profiles among biological replicates.
- - **Cell type separation**: Different cell types should form distinct clusters. For instance, GM12878, K562, MCF-7, and H1-hESC samples should each form their own separate clusters, reflecting their unique gene expression patterns.
-
- From this PCA plot, you can gain insights into the consistency and quality of your sequencing data, identify any potential issues, and understand the major sources of variation among your samples - all directly in Platform.
-
- 
-
-
-
-
- Gene expression changes with Volcano plots
-
- In the **Differential** tab, select **Volcano plots** to compare genes with significant changes in expression between two samples. For example, filter for `Type: H1 vs MCF-7` to view the differences in expression between these two cell lines.
-
- 1. **Identify upregulated and downregulated genes**: The x-axis of the volcano plot represents the log2 fold change in gene expression between the H1 and MCF-7 samples, while the y-axis represents the statistical significance of the changes.
-
- - **Upregulated genes in MCF-7**: Genes on the left side of the plot (negative fold change) are upregulated in the MCF-7 samples compared to H1. For example, the SHH gene, which is known to be upregulated in cancer cell lines, prominently appears here.
-
- 2. **Filtering for specific genes**: If you are interested in specific genes, use the filter function. For example, filter for the SHH gene in the table below the plot. This allows you to quickly locate and examine this gene in more detail.
-
- 3. **Gene expression bar plot**: After filtering for the SHH gene, select it to navigate to a gene expression bar plot. This plot will show you the expression levels of SHH across all samples, allowing you to see in which samples it is most highly expressed.
-
- - Here, SHH is most highly expressed in MCF-7, which aligns with its known role in cancer cell proliferation.
-
- Using the volcano plot, you can effectively identify and explore the genes with the most significant changes in expression between your samples, providing a deeper understanding of the molecular differences.
-
- 
-
-
-
-### Collaborate in the Studio
-
-To share the results of your RNAseq analysis or allow colleagues to perform exploratory analysis, share a link to the Studio by selecting the options menu for the Studio you want to share, then select **Copy Studio URL**. With this link, other authenticated users with the **Connect** [role](../../orgs-and-teams/roles) (or greater) can access the session directly.
-
-:::note
-See [Studios](../../studios/index) to learn how to create Studios in your own workspace.
-:::
-
-## Pipeline optimization
-
-Seqera Platform's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
-
-However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
-
-Pipeline optimization analyzes resource usage data from previous runs to optimize the resource allocation for future runs. After a successful run, optimization becomes available, indicated by the lightbulb icon next to the pipeline turning black.
-
-
- Optimize nf-core/rnaseq
-
- Navigate back to the Launchpad and select the lightbulb icon next to the nf-core/rnaseq pipeline to view the optimized profile. You can tailor the optimization's target settings and add a retry strategy as needed.
-
- #### View optimized configuration
-
- When you select the lightbulb, you can access an optimized configuration profile in the second tab of the **Customize optimization profile** window.
-
- This profile consists of Nextflow configuration settings for each process and each resource directive (where applicable): **cpus**, **memory**, and **time**. The optimized setting for a given process and resource directive is based on the maximum use of that resource across all tasks in that process.
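-
- As an illustration, an optimized profile resembles standard Nextflow process configuration. The process names and values below are hypothetical, not output from an actual optimization:
-
- ```groovy
- // Hypothetical optimized resource settings (illustrative values only)
- process {
-     withName: 'FASTQC' {
-         cpus   = 2
-         memory = 3.GB
-         time   = 30.m
-     }
-     withName: 'STAR_ALIGN' {
-         cpus   = 8
-         memory = 24.GB
-         time   = 4.h
-     }
- }
- ```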
-
- Once optimization is selected, subsequent runs of that pipeline will inherit the optimized configuration profile, indicated by the black lightbulb icon with a checkmark.
-
- :::note
- Optimization profiles are generated from one run at a time, defaulting to the most recent run, and _not_ an aggregation of previous runs.
- :::
-
- 
-
- Verify the optimized configuration of a given run by inspecting the resource usage plots for that run and these fields in the run's task table:
-
- | Description | Key |
- | ------------ | ---------------------- |
- | CPU usage | `pcpu` |
- | Memory usage | `peakRss` |
- | Runtime | `start` and `complete` |
-
-
-
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/monitor-runs.mdx b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/monitor-runs.md
similarity index 95%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/monitor-runs.mdx
rename to platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/monitor-runs.md
index 55148ef18..0808a6078 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/monitor-runs.mdx
+++ b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/monitor-runs.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, monitoring]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
There are several ways to monitor pipeline runs in Seqera Platform:
### Workspace view
diff --git a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/pipeline-optimization.mdx b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/pipeline-optimization.md
similarity index 97%
rename from platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/pipeline-optimization.mdx
rename to platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/pipeline-optimization.md
index cd67be592..eaed535cd 100644
--- a/platform-enterprise_versioned_docs/version-24.2/getting-started/quickstart-demo/pipeline-optimization.mdx
+++ b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/pipeline-optimization.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, runs, pipeline optimization]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
Seqera Platform's task-level resource usage metrics allow you to determine the resources requested for a task and what was actually used. This information helps you fine-tune your configuration more accurately.
However, manually adjusting resources for every task in your pipeline is impractical. Instead, you can leverage the pipeline optimization feature available on the Launchpad.
diff --git a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/studios.mdx b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/studios.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/studios.mdx
rename to platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/studios.md
index 9c79f6eb5..56702c033 100644
--- a/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/studios.mdx
+++ b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/studios.md
@@ -2,12 +2,9 @@
title: "Studios"
description: "An introduction to Studios in Seqera Platform"
date: "8 Jul 2024"
-tags: [platform, data, studios]
+tags: [platform, studios]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
:::info
This guide provides an introduction to Studios using a demo Studio in the Community Showcase workspace. See [Studios](../../studios/overview) to learn how to create Studios in your own workspace.
:::
diff --git a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/view-run-information.mdx b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/view-run-information.md
similarity index 98%
rename from platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/view-run-information.mdx
rename to platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/view-run-information.md
index ca4a15bba..a57d2f4e6 100644
--- a/platform-enterprise_versioned_docs/version-24.1/getting-started/quickstart-demo/view-run-information.mdx
+++ b/platform-enterprise_versioned_docs/version-25.1/getting-started/quickstart-demo/view-run-information.md
@@ -5,9 +5,6 @@ date: "8 Jul 2024"
tags: [platform, runs, pipelines, monitoring, showcase tutorial]
---
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
When you launch a pipeline, you are directed to the **Runs** tab which contains all executed workflows, with your submitted run at the top of the list.
Each new or resumed run is given a random name, which can be customized prior to launch. Each row corresponds to a specific run. As a job executes, it can transition through the following states:
diff --git a/platform-enterprise_versioned_sidebars/version-23.1-sidebars.json b/platform-enterprise_versioned_sidebars/version-23.1-sidebars.json
index 9871bb355..1621d61e1 100644
--- a/platform-enterprise_versioned_sidebars/version-23.1-sidebars.json
+++ b/platform-enterprise_versioned_sidebars/version-23.1-sidebars.json
@@ -6,7 +6,6 @@
"label": "Getting started",
"collapsed": true,
"items": [
- "getting-started/community-showcase",
"getting-started/definitions",
"getting-started/deployment-options",
"getting-started/workspace"
diff --git a/platform-enterprise_versioned_sidebars/version-23.2-sidebars.json b/platform-enterprise_versioned_sidebars/version-23.2-sidebars.json
index 9ad44370a..f91819519 100644
--- a/platform-enterprise_versioned_sidebars/version-23.2-sidebars.json
+++ b/platform-enterprise_versioned_sidebars/version-23.2-sidebars.json
@@ -6,7 +6,6 @@
"label": "Getting started",
"collapsed": true,
"items": [
- "getting-started/community-showcase",
"getting-started/definitions",
"getting-started/deployment-options",
"getting-started/workspace"
diff --git a/platform-enterprise_versioned_sidebars/version-23.3-sidebars.json b/platform-enterprise_versioned_sidebars/version-23.3-sidebars.json
index 9c329d43e..4541a27cc 100644
--- a/platform-enterprise_versioned_sidebars/version-23.3-sidebars.json
+++ b/platform-enterprise_versioned_sidebars/version-23.3-sidebars.json
@@ -66,8 +66,7 @@
"items": [
"getting-started/overview",
"getting-started/definitions",
- "getting-started/deployment-options",
- "getting-started/community-showcase"
+ "getting-started/deployment-options"
]
},
{