diff --git a/.github/workflows/production.yml b/.github/workflows/production.yml index 15374daf09..69981fd264 100644 --- a/.github/workflows/production.yml +++ b/.github/workflows/production.yml @@ -22,3 +22,4 @@ jobs: secrets: AWS_OIDC_ROLE: ${{ secrets.AWS_OIDC_ROLE_PRODUCTION }} HUGO_LLM_API: ${{ secrets.HUGO_LLM_API }} + HUGO_RAG_API: ${{ secrets.HUGO_RAG_API }} diff --git a/assets/events.csv b/assets/events.csv index 2795ad268b..eab519fcd4 100644 --- a/assets/events.csv +++ b/assets/events.csv @@ -10,6 +10,6 @@ SUSECon,2,2025-03-10,2025-03-14,Orlando,United States,FALSE,Example description. Embedded World,1,2025-03-11,2025-03-13,Nuremburg,Germany,FALSE,"Embedded World offers insight into the world of embedded systems, from components and modules to operating systems, hardware and software design, M2M communication, and more.",https://www.embedded-world.de/en,Embedded and Microcontrollers; Automotive FOSSAsia,2,2025-03-13,2025-03-15,Bangkok,Thailand,TRUE,Example description. Example description. Example description. Example description. Example description.,https://events.fossasia.org/,Servers and Cloud Computing; AI; IoT NVIDIA GTC,1,2025-03-17,2025-03-21,San Jose,United States,TRUE,"Nvidia GTC is a global artificial intelligence conference for developers that brings together developers, engineers, researchers, inventors, and IT professionals. ",https://www.nvidia.com/gtc/,ML -GDC,1,2025-03-17,2025-03-21,San Fransisco,United States,FALSE,"The Game Developers Conference (GDC) is the worldÕs premier event for developers who make the games we love. GDC is the destination for creativity, innovation, and excellence.",https://gdconf.com/,"Mobile, Graphics, and Gaming" +GDC,1,2025-03-17,2025-03-21,San Francisco,United States,FALSE,"The Game Developers Conference (GDC) is the world's premier event for developers who make the games we love. GDC is the destination for creativity, innovation, and excellence.",https://gdconf.com/,"Mobile, Graphics, and Gaming" ATO AI,2,2025-03-17,2025-03-18,Durham,United States,,Example description. Example description. Example description. Example description. Example description.,https://allthingsopen.ai/,AI KubeCon EU,2,2025-04-01,2025-04-04,London ,United Kingdom,TRUE,Example description. Example description. Example description. Example description. Example description.,https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/,Servers and Cloud Computing \ No newline at end of file diff --git a/content/learning-paths/cross-platform/daytona/_review.md b/content/learning-paths/cross-platform/daytona/_review.md deleted file mode 100644 index 706d2093e5..0000000000 --- a/content/learning-paths/cross-platform/daytona/_review.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -# ================================================================================ -# Edit -# ================================================================================ - -# Always 3 questions. Should try to test the reader's knowledge, and reinforce the key points you want them to remember. - # question: A one sentence question - # answers: The correct answers (from 2-4 answer options only). Should be surrounded by quotes. - # correct_answer: An integer indicating what answer is correct (index starts from 0) - # explanation: A short (1-3 sentence) explanation of why the correct answer is correct. Can add additional context if desired - - -review: - - questions: - question: > - Can Daytona be used to manage local dev container based environments? - answers: - - "Yes." - - "No." - correct_answer: 1 - explanation: > - Daytona can manage local development environments. - - - questions: - question: > - Which abstraction manages the details of the connection to your source code projects? - answers: - - "Provider." - - "Target." - - "Git provider." 
- correct_answer: 3 - explanation: > - Git providers connect your source code to your workspaces. - - - questions: - question: > - Can Daytona manage remote development environments in machines provided by cloud service providers? - answers: - - "Yes." - - "No." - correct_answer: 1 - explanation: > - Daytona can manage development environments in AWS, Azure, and GCP. All offer Arm-based virtual machines. - - -# ================================================================================ -# FIXED, DO NOT MODIFY -# ================================================================================ -title: "Review" # Always the same title -weight: 20 # Set to always be larger than the content in this path -layout: "learningpathall" # All files under learning paths have this same wrapper ---- diff --git a/content/learning-paths/cross-platform/github-arm-runners/_images/lscpu.png b/content/learning-paths/cross-platform/github-arm-runners/_images/lscpu.png new file mode 100644 index 0000000000..e3a975d21d Binary files /dev/null and b/content/learning-paths/cross-platform/github-arm-runners/_images/lscpu.png differ diff --git a/content/learning-paths/cross-platform/github-arm-runners/_index.md b/content/learning-paths/cross-platform/github-arm-runners/_index.md index f189c73690..9439e3597c 100644 --- a/content/learning-paths/cross-platform/github-arm-runners/_index.md +++ b/content/learning-paths/cross-platform/github-arm-runners/_index.md @@ -10,14 +10,14 @@ learning_objectives: - Use GitHub Actions to automate image builds. prerequisites: - - A GitHub account with a Team or Enterprise Cloud plan. + - A GitHub account (a Team or Enterprise Cloud plan is required for private repositories). - A Docker Hub account. 
author_primary: Jason Andrews ### Tags skilllevels: Introductory -subjects: Containers and Virtualization +subjects: CI-CD armips: - Neoverse operatingsystems: diff --git a/content/learning-paths/cross-platform/github-arm-runners/_next-steps.md b/content/learning-paths/cross-platform/github-arm-runners/_next-steps.md index 4be5bcd5f0..cf13a0ab64 100644 --- a/content/learning-paths/cross-platform/github-arm-runners/_next-steps.md +++ b/content/learning-paths/cross-platform/github-arm-runners/_next-steps.md @@ -17,14 +17,14 @@ recommended_path: "/learning-paths/cross-platform/docker-build-cloud/" # General online references (type: website) further_reading: + - resource: + title: Linux arm64 hosted runners now available for free in public repositories + link: https://github.blog/changelog/2025-01-16-linux-arm64-hosted-runners-now-available-for-free-in-public-repositories-public-preview/ + type: documentation - resource: title: Using GitHub-hosted runners link: https://docs.github.com/en/actions/using-github-hosted-runners type: documentation - - resource: - title: Supercharge your CI/CD with Arm Runners in GitHub Actions - link: https://www.youtube.com/watch?v=vrr_OgMk458 - type: video - resource: title: Arm64 on GitHub Actions Powering faster, more efficient build systems link: https://github.blog/2024-06-03-arm64-on-github-actions-powering-faster-more-efficient-build-systems/ diff --git a/content/learning-paths/cross-platform/github-arm-runners/actions.md b/content/learning-paths/cross-platform/github-arm-runners/actions.md index af8529a6b0..0abf48f210 100644 --- a/content/learning-paths/cross-platform/github-arm-runners/actions.md +++ b/content/learning-paths/cross-platform/github-arm-runners/actions.md @@ -1,14 +1,14 @@ --- title: "Run GitHub Actions jobs on the Arm-hosted runner" -weight: 3 +weight: 5 layout: "learningpathall" --- ## Use GitHub Actions -You can use GitHub Actions to build multi-architecture images by using your new Arm-hosted runner alongside a 
standard runner by creating a workflow file. +You can use GitHub Actions to build multi-architecture images by creating a workflow file and using Arm-hosted runners. ## Create a new GitHub repository diff --git a/content/learning-paths/cross-platform/github-arm-runners/intro.md b/content/learning-paths/cross-platform/github-arm-runners/intro.md new file mode 100644 index 0000000000..a9f815b30c --- /dev/null +++ b/content/learning-paths/cross-platform/github-arm-runners/intro.md @@ -0,0 +1,27 @@ +--- +title: "Build options for multi-architecture container images" + +weight: 2 + +layout: "learningpathall" +--- + +## How can I build multi-architecture container images? + +Building multi-architecture container images for complex projects is challenging. + +There are two common ways to build multi-architecture images, and both are explained in [Learn how to use Docker](/learning-paths/cross-platform/docker/). + +### Use instruction emulation + +The first method uses instruction emulation. You can learn about this method in [Build multi-architecture images with Docker buildx](/learning-paths/cross-platform/docker/buildx/). The drawback of emulation is slow performance, especially for complex builds which involve tasks such as compiling large C++ applications. + +### Use a manifest and multiple computers + +The second method uses multiple computers, one for each architecture, and joins the images to create a multi-architecture image using Docker manifest. You can learn about this method in [Use Docker manifest to create multi-architecture images](/learning-paths/cross-platform/docker/manifest/). The drawback of the manifest method is its complexity as it requires multiple systems. + +### Arm-hosted runners + +Arm-hosted runners from GitHub provide a way to create multi-architecture images with higher performance and lower complexity compared to the two methods described above. 
+ +When you use Arm-hosted runners, you don't need to worry about self-hosting (managing servers) or instruction emulation (slow performance). diff --git a/content/learning-paths/cross-platform/github-arm-runners/public-repos.md b/content/learning-paths/cross-platform/github-arm-runners/public-repos.md new file mode 100644 index 0000000000..af51cd5b3f --- /dev/null +++ b/content/learning-paths/cross-platform/github-arm-runners/public-repos.md @@ -0,0 +1,50 @@ +--- +title: "Arm-hosted runners for public repositories" + +weight: 3 + +layout: "learningpathall" +--- + +## What are Arm-hosted runners? + +Runners are the machines that execute jobs in a GitHub Actions workflow. An Arm-hosted runner is a runner that is managed by GitHub and uses the Arm architecture. This means that you don't need to provide a server to run Actions workflows. GitHub provides the system and runs the Action workflows for you. + +Arm-hosted runners are available for public and private repositories. + +If you have a free plan, Arm-hosted runners are available in public repositories at no cost, subject to standard usage limits. + +You can use Arm-hosted runners in private repositories with a Teams or Enterprise Cloud account (covered on the next page). + +## What kind of server hardware is used by Arm-hosted runners? + +You may have software that relies on Arm architecture features. Arm-hosted runners are powered by Cobalt 100 processors, based on the Arm Neoverse N2. The free runners have 4 vCPUs and Armv9-A features including Scalable Vector Extension 2 (SVE2). + +The output of the `lscpu` command is below. + + + +## What do I need to change in my workflow to use Arm-hosted runners? + +To use Arm-hosted Linux runners, use the `ubuntu-22.04-arm` and `ubuntu-24.04-arm` labels in your public repository workflow runs. 
+ +For example, if you have a workflow file with: + +```console +runs-on: ubuntu-24.04 +``` + +You can replace it with the Arm-hosted runner: + +```console +runs-on: ubuntu-24.04-arm +``` + +## How can I find out more about the software installed on the Arm-hosted runners? + +You can look at the [GitHub Actions Partner Images repository](https://github.com/actions/partner-runner-images/) for information about the runner images and installed software. + +You can also use the repository to report issues or request additional software be added to the images. + + + diff --git a/content/learning-paths/cross-platform/github-arm-runners/runner.md b/content/learning-paths/cross-platform/github-arm-runners/runner.md index 63b9ce50ae..1ee965203c 100644 --- a/content/learning-paths/cross-platform/github-arm-runners/runner.md +++ b/content/learning-paths/cross-platform/github-arm-runners/runner.md @@ -1,36 +1,16 @@ --- -title: "Create a new Arm-hosted runner" +title: "Create a new Arm-hosted runner for private repositories" -weight: 2 +weight: 4 layout: "learningpathall" --- -## How can I build multi-architecture container images? +## Can I use Arm-hosted runners for private repositories? -Building multi-architecture container images for complex projects is challenging. +Yes, you can use Arm-hosted runners in private repositories. -There are two common ways to build multi-architecture images, and both are explained in [Learn how to use Docker](/learning-paths/cross-platform/docker/). - -The first method uses instruction emulation. You can learn about this method in [Build multi-architecture images with Docker buildx](/learning-paths/cross-platform/docker/buildx/). The drawback of emulation is slow performance, especially for complex builds which involve tasks such as compiling large C++ applications. - -The second method uses multiple machines, one for each architecture, and joins the images to create a multi-architecture image using Docker manifest. 
You can learn about this method in [Use Docker manifest to create multi-architecture images](/learning-paths/cross-platform/docker/manifest/). The drawback of the manifest method is its complexity as it requires multiple machines. - -Arm-hosted runners from GitHub provide a way to create multi-architecture images with higher performance and lower complexity compared to the two methods described above. - -When you use Arm-hosted runners, you don't need to worry about self-hosting (managing servers) or instruction emulation (slow performance). - -## What are Arm-hosted runners? - -Runners are the machines that execute the jobs in a GitHub Actions workflow. An Arm-hosted runner is a runner that is managed by GitHub and uses the Arm architecture. This means that you don't need to provide a server to run Actions workflows. GitHub provides the system and runs the Action workflows for you. - -Arm-hosted runners are available for Linux and Windows. - -This Learning Path uses Linux. - -{{% notice Note %}} You must have a Team or Enterprise Cloud plan to use Arm-hosted runners. -{{% /notice %}} Two types of GitHub-hosted runners are available; standard runners, and larger runners. Larger runners are differentiated from standard runners because users can control the amount of RAM, the number of CPUs, and configure the allocated disk space. Larger runners have additional options for a static IP address and the ability to group runners and control settings across the runner group. Currently, Arm-hosted runners are a type of larger runner. 
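The pairing of a standard runner and an Arm-hosted runner described above can be sketched as a single workflow: a per-architecture build matrix followed by a manifest-merge job. This is a hedged illustration only — the image name, tags, secrets, and test steps are placeholders, not values from this Learning Path's repository:

```yaml
name: multi-arch-image
on: [push]

jobs:
  build:
    strategy:
      matrix:
        include:
          - runner: ubuntu-24.04        # standard x64 runner
            arch: amd64
          - runner: ubuntu-24.04-arm    # Arm-hosted runner
            arch: arm64
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: myuser/myapp:${{ matrix.arch }}   # placeholder image name

  merge:
    needs: build
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      # Join the per-architecture images into one multi-architecture tag
      - run: |
          docker buildx imagetools create -t myuser/myapp:latest \
            myuser/myapp:amd64 myuser/myapp:arm64
```

Each matrix leg builds natively on its own architecture, so no emulation is involved, and the merge job replaces the manual `docker manifest` step.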
diff --git a/content/learning-paths/laptops-and-desktops/windows_armpl/_review.md b/content/learning-paths/laptops-and-desktops/windows_armpl/_review.md deleted file mode 100644 index 0c60900719..0000000000 --- a/content/learning-paths/laptops-and-desktops/windows_armpl/_review.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -review: - - questions: - question: > - Arm Performance Libraries can be used to optimize: - answers: - - Arm64 Linux applications. - - Arm64 Windows applications. - - Both Arm64 Linux and Windows applications. - correct_answer: 3 - explanation: > - Arm Performance Libraries support both Linux and Windows. - - - questions: - question: > - Arm Performance Libraries has been integrated into Microsoft Visual Studio installation. - answers: - - "Yes" - - "No" - correct_answer: 2 - explanation: > - No, you need install Arm Performance Libraries package before using it. - - questions: - question: > - Arm Performance Libraries provides which type of code optimization? - answers: - - Standard math libraries. - - Standard encryption libraries. - - Standard compression libraries. - correct_answer: 1 - explanation: > - Arm Performance Libraries provides optimized standard core math libraries for numerical applications on 64-bit Arm based processors. 
- - - -# ================================================================================ -# FIXED, DO NOT MODIFY -# ================================================================================ -title: "Review" # Always the same title -weight: 20 # Set to always be larger than the content in this path -layout: "learningpathall" # All files under learning paths have this same wrapper ---- diff --git a/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md b/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md index 6630f24d78..9ab796c7a6 100644 --- a/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md +++ b/content/learning-paths/servers-and-cloud-computing/cca-veraison/attestation-verification.md @@ -11,12 +11,12 @@ layout: learningpathall Linaro’s verification service is implemented using components from the open source [Veraison](https://github.com/veraison) project. -The URL for reaching this experimental verifier service is http://veraison.test.linaro.org:8080. +The URL for reaching this experimental verifier service is https://veraison.test.linaro.org:8443. To check that you can reach the Linaro attestation verifier service, run the following command: ```bash -curl http://veraison.test.linaro.org:8080/.well-known/veraison/verification +curl https://veraison.test.linaro.org:8443/.well-known/veraison/verification ``` This is a simple call to query the well-known characteristics of the verification service. If it succeeds, it will return a JSON response that looks something like this: @@ -67,7 +67,7 @@ The easiest way to do this is to use the `jq` utility. You can save the public key by repeating the curl command from the previous step and use `jq` to filter the response down to just the public key part. 
Save it into a file called `pkey.json`: ```bash -curl -s -N http://veraison.test.linaro.org:8080/.well-known/veraison/verification | jq '."ear-verification-key"' > $HOME/pkey.json +curl -s -N https://veraison.test.linaro.org:8443/.well-known/veraison/verification | jq '."ear-verification-key"' > $HOME/pkey.json ``` You have now saved the public key of the verification service. You are now ready to submit the CCA example attestation token to the service and get an attestation result. @@ -75,7 +75,7 @@ You have now saved the public key of the verification service. You are now ready To submit the example CCA attestation token to the verification service, you will need to use the `evcli` tool once again. First, configure the correct API endpoint for the Linaro verifier service: ```bash -export API_SERVER=http://veraison.test.linaro.org:8080/challenge-response/v1/newSession +export API_SERVER=https://veraison.test.linaro.org:8443/challenge-response/v1/newSession ``` Now submit the token using the following command. The output of this command is an attestation result, which will be saved in a file called `attestation_result.jwt`: diff --git a/content/learning-paths/servers-and-cloud-computing/gh-runners/train-test.md b/content/learning-paths/servers-and-cloud-computing/gh-runners/train-test.md index 2cfdc3e401..877e940fda 100644 --- a/content/learning-paths/servers-and-cloud-computing/gh-runners/train-test.md +++ b/content/learning-paths/servers-and-cloud-computing/gh-runners/train-test.md @@ -17,7 +17,7 @@ https://github.com/Arm-Labs/gh_armrunner_mlops_gtsrb ``` Fork the repository, using the **Fork** button: - + Create a fork within a GitHub Organization or Team where you have access to Arm-hosted GitHub runners. 
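Once the fork lives in an Organization or Team with Arm-hosted runners, a workflow targets the runner by the label assigned when the runner was created. A minimal sketch, where `arm-mlops-runner` is a placeholder for your own runner label, not a label defined by this repository:

```yaml
jobs:
  train:
    runs-on: arm-mlops-runner   # placeholder: the label of your Arm-hosted runner
    steps:
      - uses: actions/checkout@v4
```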
diff --git a/content/learning-paths/servers-and-cloud-computing/gh-runners/workflows.md b/content/learning-paths/servers-and-cloud-computing/gh-runners/workflows.md index 77519b81bd..bfcd056e59 100644 --- a/content/learning-paths/servers-and-cloud-computing/gh-runners/workflows.md +++ b/content/learning-paths/servers-and-cloud-computing/gh-runners/workflows.md @@ -58,13 +58,13 @@ Navigate to the **Train Model** workflow under the `Actions` tab. Press the `Run workflow` button and run the workflow on the main branch. - + The workflow starts running. It takes about 8 minutes to complete. Click on the workflow to see the output from each step of the workflow. - + Expand on the `Run training script` step to see the training loss per epoch followed by `Finished Training`. @@ -86,7 +86,7 @@ Finished Training Confirm the model is generated and saved as an artifact in the job's overview. - + This trained model artifact is used in the next step. @@ -143,7 +143,7 @@ Complete the steps below to modify the testing workflow file: 4. Copy the 11-digit ID number from the end of the URL in your browser address bar. - + 5. Navigate back to the **Code** tab and open the file `.github/workflows/test-model.yml`. @@ -168,7 +168,7 @@ The workflow starts running. Click on the workflow to view the output from each step. - + Click on the **Run testing script** step to see the accuracy of the model and a table of the results from the PyTorch profiler. 
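The hand-off between the training and testing workflows can be sketched with `actions/download-artifact`, which fetches an artifact from an earlier run by ID. This is a hedged illustration — the artifact name, script path, and run ID are placeholders, not values from this repository:

```yaml
jobs:
  test:
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: trained-model               # placeholder artifact name
          run-id: 12345678901               # the 11-digit run ID copied from the URL
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - run: python3 test_model.py          # placeholder test script
```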
diff --git a/content/learning-paths/servers-and-cloud-computing/intro/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/intro/_next-steps.md index 83743a5289..59d8512631 100644 --- a/content/learning-paths/servers-and-cloud-computing/intro/_next-steps.md +++ b/content/learning-paths/servers-and-cloud-computing/intro/_next-steps.md @@ -16,8 +16,8 @@ recommended_path: "/learning-paths/servers-and-cloud-computing/csp/" further_reading: - resource: - title: Where to Buy (Ampere Computing) - link: https://amperecomputing.com/where-to-buy + title: Ampere Computing + link: https://amperecomputing.com/developers/ type: website # ================================================================================ diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_demo.md b/content/learning-paths/servers-and-cloud-computing/rag/_demo.md index b8f321a74e..19b3a9c2e5 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/_demo.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/_demo.md @@ -2,7 +2,7 @@ title: Run a llama.cpp chatbot powered by Arm Kleidi technology overview: | - This Arm learning path shows how to use a single c4a-standard-64 Google Axion instance -- powered by an Arm Neoverse CPU -- to build a simple "Token as a Service" RAG-enabled server, used below to provide a chatbot to serve a small number of concurrent users. + This Arm learning path shows how to use a single c4a-highcpu-72 Google Axion instance -- powered by an Arm Neoverse CPU -- to build a simple "Token as a Service" RAG-enabled server, used below to provide a chatbot to serve a small number of concurrent users. This architecture would be suitable for businesses looking to deploy the latest Generative AI technologies with RAG capabilities using their existing CPU compute capacity and deployment pipelines. It enables semantic search over chunked documents using FAISS vector store. 
The demo uses the open source llama.cpp framework, which Arm has enhanced by contributing the latest Arm Kleidi technologies. Further optimizations are achieved by using the smaller 8 billion parameter Llama 3.1 model, which has been quantized to optimize memory usage. diff --git a/content/learning-paths/servers-and-cloud-computing/rag/_index.md b/content/learning-paths/servers-and-cloud-computing/rag/_index.md index fc7c9a1377..ebfe968750 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/_index.md @@ -3,11 +3,11 @@ title: Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Arm Se minutes_to_complete: 45 -who_is_this_for: This Learning Path is for software developers, ML engineers, and those looking to deploy production-ready LLM chatbots with RAG capabilities, knowledge base integration, and performance optimization for Arm Architecture. +who_is_this_for: This Learning Path is for software developers, ML engineers, and those looking to deploy production-ready LLM chatbots with Retrieval Augmented Generation (RAG) capabilities, knowledge base integration, and performance optimization for Arm Architecture. learning_objectives: - Set up llama-cpp-python optimized for Arm servers. - - Implement RAG architecture using the FAISS vector database. + - Implement RAG architecture using the Facebook AI Similarity Search (FAISS) vector database. - Optimize model performance through 4-bit quantization. - Build a web interface for document upload and chat. - Monitor and analyze inference performance metrics. 
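The semantic-search step that FAISS performs can be illustrated without the full model stack. The sketch below is an assumption-laden stand-in: it substitutes a toy bag-of-words embedding and NumPy inner products for FAISS, and the chunk texts are invented — but the index-then-rank-by-similarity flow is the same idea the chatbot relies on:

```python
import re
import numpy as np

def tokenize(text):
    # Lowercase word tokens; punctuation is stripped.
    return re.findall(r"[a-z0-9]+", text.lower())

def embed(text, vocab):
    # Bag-of-words count vector over a fixed vocabulary, L2-normalized
    # (a real pipeline would use a sentence-embedding model instead).
    vec = np.zeros(len(vocab))
    for tok in tokenize(text):
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, chunks, top_k=1):
    # Index each chunk once, then rank by cosine similarity to the query --
    # the same inner-product search FAISS performs at scale.
    vocab = {tok: i for i, tok in
             enumerate(sorted({t for c in chunks for t in tokenize(c)}))}
    index = np.stack([embed(c, vocab) for c in chunks])
    scores = index @ embed(query, vocab)
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]

chunks = [
    "llama.cpp runs LLM inference efficiently on Arm CPUs.",
    "FAISS stores dense vectors and performs fast similarity search.",
    "Streamlit builds a simple web interface for document upload and chat.",
]
print(retrieve("Which library does vector similarity search?", chunks))
```

Swapping the toy index for a FAISS `IndexFlatIP` over real embeddings changes the scale, not the shape, of this retrieval step.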
diff --git a/content/learning-paths/servers-and-cloud-computing/rag/backend.md b/content/learning-paths/servers-and-cloud-computing/rag/backend.md index e5601f06e5..de50065fc6 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/backend.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/backend.md @@ -34,7 +34,7 @@ app = Flask(__name__) CORS(app) # Configure paths -BASE_PATH = "/home/ubuntu" +BASE_PATH = os.path.expanduser("~") TEMP_DIR = os.path.join(BASE_PATH, "temp") VECTOR_DIR = os.path.join(BASE_PATH, "vector") MODEL_PATH = os.path.join(BASE_PATH, "models/llama3.1-8b-instruct.Q4_0_arm.gguf") @@ -193,4 +193,4 @@ python3 backend.py ``` You should see output similar to the image below when the backend server starts successfully: - \ No newline at end of file + diff --git a/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md b/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md index 8e659b8a41..fbd872adf5 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/chatbot.md @@ -13,9 +13,14 @@ Open the web application in your browser using either the local URL or the exter http://localhost:8501 or http://75.101.253.177:8501 ``` +{{% notice Note %}} + +To access the links you may need to allow inbound TCP traffic in your instance's security rules. Always review these permissions with caution as they may introduce security vulnerabilities. + +{{% /notice %}} ## Upload a PDF File and Create a New Index -Now you can upload a PDF file in the web browser by selecting the **Create New Store** option. +Now you can upload a PDF file in the web browser by selecting the **Create New Store** option. 
Follow these steps to create a new index: diff --git a/content/learning-paths/servers-and-cloud-computing/rag/frontend.md b/content/learning-paths/servers-and-cloud-computing/rag/frontend.md index ebcfc0d232..51cb4eb33a 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/frontend.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/frontend.md @@ -20,7 +20,7 @@ from PIL import Image from typing import Dict, Any # Configure paths and URLs -BASE_PATH = "/home/ubuntu" +BASE_PATH = os.path.expanduser("~") API_URL = "http://localhost:5000" # Page config @@ -139,4 +139,4 @@ python3 -m streamlit run frontend.py ``` You should see output similar to the image below when the frontend server starts successfully: - \ No newline at end of file + diff --git a/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md b/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md index 9551af2cb1..7725d7658e 100644 --- a/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md +++ b/content/learning-paths/servers-and-cloud-computing/rag/rag_llm.md @@ -14,7 +14,7 @@ This learning path demonstrates how to build and deploy a Retrieval Augmented Ge ## Overview -In this Learning Path, you learn how to build a Retrieval Augmented Generation (RAG) chatbot using llama-cpp-python, a Python binding for llama.cpp that enables efficient LLM inference on Arm CPUs. +In this Learning Path, you learn how to build a RAG chatbot using llama-cpp-python, a Python binding for llama.cpp that enables efficient LLM inference on Arm CPUs. The tutorial demonstrates how to integrate the FAISS vector database with the Llama-3.1-8B model for document retrieval, while leveraging llama-cpp-python's optimized C++ backend for high-performance inference. 
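After retrieval, a RAG chatbot assembles the model prompt from the retrieved chunks before calling the LLM. The sketch below illustrates that step only — the function name, template wording, and character budget are illustrative assumptions, not the Learning Path's actual code:

```python
def build_rag_prompt(question, retrieved_chunks, max_chars=2000):
    # Stitch retrieved chunks into a context block, trimming to a character
    # budget so the final prompt fits within the model's context window.
    context = ""
    for chunk in retrieved_chunks:
        if len(context) + len(chunk) > max_chars:
            break
        context += chunk.strip() + "\n\n"
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "Which CPU powers the demo?",
    ["The demo runs llama.cpp on an Arm Neoverse-based Google Axion instance."],
)
print(prompt)
```

The resulting string is what gets passed to llama-cpp-python for completion, grounding the model's answer in the uploaded documents.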
diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html b/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html index 19de87a45b..5c821e9ceb 100644 --- a/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html +++ b/themes/arm-design-system-hugo-theme/layouts/partials/learning-paths/next-steps.html @@ -49,45 +49,42 @@ +
Show your network what you've learned by sharing this Learning Path with others.
+Share what you've learned.