diff --git a/contributors.csv b/assets/contributors.csv similarity index 100% rename from contributors.csv rename to assets/contributors.csv diff --git a/content/install-guides/_images/sysbox.gif b/content/install-guides/_images/sysbox.gif new file mode 100644 index 0000000000..c62004c364 Binary files /dev/null and b/content/install-guides/_images/sysbox.gif differ diff --git a/content/install-guides/aws-sam-cli.md b/content/install-guides/aws-sam-cli.md new file mode 100644 index 0000000000..92a70067e0 --- /dev/null +++ b/content/install-guides/aws-sam-cli.md @@ -0,0 +1,177 @@ +--- +title: AWS SAM CLI + +author_primary: Jason Andrews +minutes_to_complete: 15 + +official_docs: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html + +additional_search_terms: +- AWS +- Lambda + +layout: installtoolsall +multi_install: false +multitool_install_part: false +test_maintenance: false +tool_install: true +weight: 1 +--- + +The Amazon Web Services (AWS) Serverless Application Model (SAM) CLI is an open-source command-line tool that you can use to build, test, and deploy serverless applications. The SAM CLI provides a Lambda-like execution environment that lets you locally build, test and debug applications defined by AWS SAM templates. It is available for a variety of operating systems and Linux distributions, and supports the Arm architecture. + +## Before you begin + +Follow the instructions below to install and try the latest version of the AWS SAM CLI for Ubuntu on Arm. + +Confirm you are using an Arm machine by running: + +```bash { target="ubuntu:latest" } +uname -m +``` + +The output should be: + +```output +aarch64 +``` + +If you see a different result, you are not using an Arm-based computer running 64-bit Linux. + +Running the AWS SAM CLI requires Docker. Refer to the [Docker](/install-guides/docker/) Install Guide for installation instructions. Confirm Docker is running before installing the SAM CLI. + +Python and Python pip are also required to run the SAM CLI example. + +To install, run the following command: + +```console +sudo apt install python-is-python3 python3-pip -y +``` + +## Download and install the AWS SAM CLI + +There are two options to install the SAM CLI, you can select your preferred method: + +* From a zip file. +* Using the Python `pip` command. + +### Download and install from zip file + +Use `wget`: + +```bash +wget https://github.com/aws/aws-sam-cli/releases/latest/download/aws-sam-cli-linux-arm64.zip +unzip aws-sam-cli-linux-arm64.zip -d sam-install +sudo ./sam-install/install +``` + +### Install the SAM CLI using Python pip + +``` +sudo apt install python3-venv -y +python -m venv .venv +source .venv/bin/activate +pip install aws-sam-cli +``` + +### Confirm that the SAM CLI has been installed + +```bash +sam --version +``` + +The version should be printed on screen: + +```output +SAM CLI, version 1.125.0 +``` + +## Example application + +You can use the AWS SAM CLI to build and deploy a simple "Hello World" serverless application that includes the line `uname -m` to check the platform it is running on, by following these steps. + +1. Create the project + +Use the code below, adjusting the runtime argument if you have a different version of Python: + +```console +sam init --runtime python3.12 --architecture arm64 --dependency-manager pip --app-template hello-world --name uname-app --no-interactive +``` + +2. Change to the new directory: + +```console +cd uname-app +``` + +3. 
Modify the `hello_world/app.py` file to include the command `uname -m` + +Use a text editor to replace the contents of the `hello_world/app.py` file with the code below: + +```python +import json +import os + +def lambda_handler(event, context): + """Sample pure Lambda function + + Parameters + ---------- + event: dict, required + API Gateway Lambda Proxy Input Format + + Event doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format + + context: object, required + Lambda Context runtime methods and attributes + + Context doc: https://docs.aws.amazon.com/lambda/latest/dg/python-context-object.html + + Returns + ------ + API Gateway Lambda Proxy Output Format: dict + + Return doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html + """ + + ret = os.popen('uname -m').read() + + return { + "statusCode": 200, + "body": json.dumps({ + "message": ret, + # "location": ip.text.replace("\n", "") + }), + } +``` + +4. Build the application: + +```console +sam build +``` + +5. Test the deployed application: + +```console +sam local invoke "HelloWorldFunction" -e events/event.json +``` + +The output below shows the results from the command `uname -m` and the value of `aarch64` confirms an Arm Linux computer: + +```output +Invoking app.lambda_handler (python3.12) +Local image was not found. +Removing rapid images for repo public.ecr.aws/sam/emulation-python3.12 +Building image........................................................................................................................ +Using local image: public.ecr.aws/lambda/python:3.12-rapid-arm64. + +Mounting /home/ubuntu/uname-app/.aws-sam/build/HelloWorldFunction as /var/task:ro,delegated, inside runtime container +START RequestId: 7221da4d-346d-4e2e-831e-dcde1cb47b5b Version: $LATEST +END RequestId: 513dbd6f-7fc0-4212-ae13-a9a4ce2f21f4 +REPORT RequestId: 513dbd6f-7fc0-4212-ae13-a9a4ce2f21f4 Init Duration: 0.26 ms Duration: 84.22 ms Billed Duration: 85 ms Memory Size: 128 MB Max Memory Used: 128 MB +{"statusCode": 200, "body": "{\"message\": \"aarch64\\n\"}"} +``` + +You are ready to use the AWS SAM CLI to build more complex functions and deploy them into AWS. Make sure to select `arm64` as the architecture for your Lambda functions. + diff --git a/content/install-guides/java.md b/content/install-guides/java.md index 343a870d38..d989227bbb 100644 --- a/content/install-guides/java.md +++ b/content/install-guides/java.md @@ -32,7 +32,13 @@ Below are some of the common methods to install Java. This includes both the Jav Pick the one that works best for you. -## Install Java using the Linux package manager +{{% notice Note %}} +The Java Technology Compatibility Kit (TCK) is a test suite that verifies whether a Java implementation conforms to the Java SE Platform Specification. It's a crucial tool for ensuring that Java applications can run consistently across different platforms and implementations. + +Check the [OCTLA Signatories List](https://openjdk.org/groups/conformance/JckAccess/jck-access.html) to see who has been granted access to the TCK. +{{% /notice %}} + +## Can I install Java using the Linux package manager? For distributions using `apt` - including Debian and Ubuntu: @@ -55,7 +61,15 @@ sudo pacman -S jdk-openjdk sudo pacman -S jre-openjdk ``` -## Install Java using Amazon Corretto +## Can I install Java using Snap? 
+ +For Linux distributions with `snap` you can install Java using: + +```console +sudo snap install openjdk +``` + +## How do I install Amazon Corretto? Amazon Corretto is a no-cost distribution of the Open Java Development Kit (OpenJDK). It is maintained and supported by Amazon Web Services (AWS). @@ -69,15 +83,64 @@ sudo apt-get update; sudo apt-get install -y java-21-amazon-corretto-jdk More installation options for Corretto are available in the [Amazon Corretto 21 Guide for Linux](https://docs.aws.amazon.com/corretto/latest/corretto-21-ug/linux-info.html) -## Install Java using Snap +## How do I install the Microsoft Build of OpenJDK? -For Linux distributions with `snap` you can install Java using: +The Microsoft Build of OpenJDK is a no-cost, open source distribution of OpenJDK. It includes Long-Term Support (LTS) binaries for Java 11 and Java 17 and runs on Arm Linux. + +{{% notice Note %}} +The Arm architecture is not available in the repositories for the `apt` package manager. +{{% /notice %}} + +You can download a tar.gz file from [Download the Microsoft Build of OpenJDK](https://learn.microsoft.com/en-gb/java/openjdk/download) + +For example: ```console -sudo snap install openjdk +wget https://aka.ms/download-jdk/microsoft-jdk-21.0.4-linux-aarch64.tar.gz +``` + +Extract the contents of the file: + +```console +tar xvf microsoft-jdk-21.0.4-linux-aarch64.tar.gz ``` -## Is there a way to install Java from the official website? +Move the contents to a directory of your choice: + +```console +sudo mv jdk-21.0.4+7/ /usr/local +``` + +Set up environment variables to locate your installation: + +```console +export JAVA_HOME=/usr/local/jdk-21.0.4+7 +export PATH=$JAVA_HOME/bin:$PATH +``` + +Add the environment variables to your `~/.bashrc` file to set them permanently. + +For more information about the available versions and supported platforms refer to [About the Microsoft Build of OpenJDK](https://learn.microsoft.com/en-gb/java/openjdk/overview). + +## How do I install Eclipse Temurin from the Adoptium Working Group? + +The Adoptium Working Group promotes and supports high-quality, TCK certified runtimes and associated technology for use across the Java ecosystem. + +Eclipse Temurin is the name of the OpenJDK distribution from Adoptium. + +To install Temurin on Ubuntu run: + +```console +sudo apt install -y wget apt-transport-https gpg +wget -qO - https://packages.adoptium.net/artifactory/api/gpg/key/public | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/adoptium.gpg > /dev/null +echo "deb https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list +sudo apt update +sudo apt install temurin-17-jdk -y +``` + +For more information about the available versions and supported platforms refer to [Temurin documentation](https://adoptium.net/docs/). + +## How do I install Java from Oracle? You can download Java from the [Oracle website](https://www.oracle.com/java/technologies/javase-downloads.html) and install it manually. Look for the files with ARM64 in the description. @@ -159,7 +222,7 @@ javac 21.0.4 ## Which version of Java should I use for Arm Linux systems? -It’s important to ensure that your version of Java is at least 11.0.9. There are large performance improvements starting from version 11.0.9. Since then, Java performance has steadily increased over time and newer versions will provide better performance. 
+For performance and security, it’s important to ensure that your version of Java is at least 11.0.12. Earlier versions lack significant performance improvements. Java performance has steadily increased over time and newer versions will provide better performance. ## Which flags are available for tuning the JVM? diff --git a/content/install-guides/sysbox.md b/content/install-guides/sysbox.md new file mode 100644 index 0000000000..90fced825e --- /dev/null +++ b/content/install-guides/sysbox.md @@ -0,0 +1,202 @@ +--- +### Title the install tools article with the name of the tool to be installed +### Include vendor name where appropriate +title: Sysbox + +### Optional additional search terms (one per line) to assist in finding the article +additional_search_terms: +- cloud +- vm +- virtual machine +- linux +- containers +- container +- docker + +### Estimated completion time in minutes (please use integer multiple of 5) +minutes_to_complete: 30 + +author_primary: Jason Andrews + +### Link to official documentation +official_docs: https://github.com/nestybox/sysbox/blob/master/docs/user-guide/README.md + +### PAGE SETUP +weight: 1 # Defines page ordering. Must be 1 for first (or only) page. +tool_install: true # Set to true to be listed in main selection page, else false +multi_install: false # Set to true if first page of multi-page article, else false +multitool_install_part: false # Set to true if a sub-page of a multi-page article, else false +layout: installtoolsall # DO NOT MODIFY. Always true for tool install articles +--- + +[Sysbox](https://github.com/nestybox/sysbox/blob/master/README.md) enables you to use Docker containers for workloads that typically require virtual machines. Containers run with Sysbox are able to run software that relies on the [systemd System and Service Manager](https://systemd.io/) that is not usually present in containers, and it does this without the need for a full virtual machine and hardware emulation. + +Running Docker inside Docker, and Kubernetes inside Docker, are also Sysbox use cases. Without Sysbox, these are difficult because the Docker daemon requires systemd. + +In summary, Sysbox is a powerful container runtime that provides many of the benefits of virtual machines without the overhead of running a full VM. It is good for workloads that require the ability to run system-level software. + +## What do I need to run Sysbox? + +Sysbox runs on Linux and supports Arm. + +Sysbox has limited suppot for older versions of Linux, but recent Linux versions are easily compatible. + +If you are unsure about your Linux distribution and Linux kernel version, you can check [Sysbox Distro Compatibility](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md) + +Sysbox is a container runtime, and so Docker is required before installing Sysbox. + +In most cases, you can install Docker on Arm Linux with the commands: + +```bash +curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh +sudo usermod -aG docker $USER ; newgrp docker +``` + +Refer to the [Docker install guide](/install-guides/docker/docker-engine/) for more information. + +You can use Sysbox on a virtual machine from a [cloud service provider](/learning-paths/servers-and-cloud-computing/intro/find-hardware/), a Raspberry Pi 5, or any other Arm Linux-based computer. + +## How do I install Sysbox? 
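Before downloading the package, you can print the kernel release you are running and compare it against the versions listed on the [Sysbox Distro Compatibility](https://github.com/nestybox/sysbox/blob/master/docs/distro-compat.md) page mentioned above:

```bash
uname -r
```

If your kernel is listed as supported, continue with the download below.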
+ +Download the Sysbox official package from [Sysbox Releases](https://github.com/nestybox/sysbox/releases/) + +You can download the Debian package for Arm from the command line: + +```bash +wget https://downloads.nestybox.com/sysbox/releases/v0.6.4/sysbox-ce_0.6.4-0.linux_arm64.deb +``` + +Install the package using the `apt` command: + +```bash +sudo apt-get install ./sysbox-ce_0.6.4-0.linux_arm64.deb -y +``` + +If you are not using a Debian-based Linux distribution, you can use instructions to build Sysbox from the source code. Refer to [Sysbox Developer's Guide: Building & Installing](https://github.com/nestybox/sysbox/blob/master/docs/developers-guide/build.md) for further information. + +Run `systemctl` to confirm if Sysbox is running: + +```bash +systemctl list-units -t service --all | grep sysbox +``` + +If Sysbox is running, you see the output: + +```output + sysbox-fs.service loaded active running sysbox-fs (part of the Sysbox container runtime) + sysbox-mgr.service loaded active running sysbox-mgr (part of the Sysbox container runtime) + sysbox.service loaded active running Sysbox container runtime +``` + +## How can I get set up with Sysbox quickly? + +You can try Sysbox by creating a container image that includes systemd and Docker. + +Use a text editor to copy the text below to a file named `Dockerfile`: + +```console +FROM ubuntu:24.04 + +RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections + +RUN apt-get update && \ + apt-get -y install sudo curl net-tools openssh-server + +ENV USER=ubuntu + +RUN echo "$USER:ubuntu" | chpasswd && adduser $USER sudo +RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers + +# Install Docker +RUN curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh +RUN sudo usermod -aG docker $USER + +EXPOSE 22 + +ENTRYPOINT [ "/sbin/init", "--log-level=err" ] +``` + +Notice that Docker and the SSH server are installed, and port 22 is open for SSH connections. + +Build a container image using `docker`: + +```bash +docker build -t sysbox-test -f Dockerfile . +``` + +Use Sysbox as the container runtime to create a new container: + +```bash +docker run --runtime=sysbox-runc -it -P --hostname=sbox sysbox-test +``` + +The animated output below shows the Linux init process running. You can log in with the password `ubuntu`, or change it in the Dockerfile above. + +You can use Docker inside the container and the SSH server operates as expected. Both are possible because systemd is running in the container. + +![Connect #center](/install-guides/_images/sysbox.gif) + +## How can I use SSH to connect to a Sysbox container? + +To connect using SSH, you can identify the IP address of your Sysbox container in two alternative ways, from inside the container, or from outside the container. + +To find the IP address from inside the container use the `ifconfig` command: + +```console +ifconfig +``` + +The output is similar to: + +```output +eth0: flags=4163 mtu 1500 + inet 172.20.0.2 netmask 255.255.0.0 broadcast 172.20.255.255 + ether 02:42:ac:14:00:02 txqueuelen 0 (Ethernet) + RX packets 126 bytes 215723 (215.7 KB) + RX errors 0 dropped 0 overruns 0 frame 0 + TX packets 115 bytes 7751 (7.7 KB) + TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 +``` + +The `inet` IP address for `eth0` is the one you can use to SSH from outside the Sysbox container. + +For this example, the SSH command is below. Modify the IP address for your container. 
+ +```console +ssh ubuntu@172.20.0.2 +``` + +Log in using the same `ubuntu` username and password. + +You can also use the `docker` command to identify the IP address and port from outside the container. + +Run the command below from another shell outside of the Sysbox container: + +```console +docker ps +``` + +The output is similar to: + +```output +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +3a42487cddc0 sysbox-test "/sbin/init --log-le…" 10 minutes ago Up 10 minutes 0.0.0.0:32768->22/tcp, [::]:32768->22/tcp determined_hopper +``` + +Look in the `PORTS` column for the port number that is connected to port 22 of the container, in this example it is 32768. You can use `localhost`, `0.0.0.0` or the actual IP of your machine with the identified port. + +SSH to the container using the connected port: + +```console +ssh ubuntu@localhost -p 32768 +``` + +Log in using the same `ubuntu` username and password. + +You can exit the Sysbox container using: + +```console +sudo halt +``` + +Sysbox behaves like a virtual machine and you can use it to run applications that require system services normally not available in containers. It is useful for testing and development tasks because the container changes are not saved, meaning that you can create a clean testing environment simply by restarting the Sysbox container. diff --git a/content/install-guides/wperf.md b/content/install-guides/wperf.md index 3eb598b0e5..7bba607a4f 100644 --- a/content/install-guides/wperf.md +++ b/content/install-guides/wperf.md @@ -18,7 +18,7 @@ additional_search_terms: minutes_to_complete: 15 ### Link to official documentation -official_docs: https://gitlab.com/Linaro/WindowsPerf/windowsperf/-/blob/main/wperf/README.md +official_docs: https://github.com/arm-developer-tools/windowsperf/blob/main/INSTALL.md author_primary: Jason Andrews @@ -32,7 +32,7 @@ layout: installtoolsall # DO NOT MODIFY. Always true for tool install ar WindowsPerf is an open-source command line tool for performance analysis on Windows on Arm devices. -WindowsPerf consists of a kernel-mode driver and a user-space command-line tool. The command-line tool is modeled after the Linux `perf` command. +WindowsPerf consists of a kernel-mode driver and a user-space command-line tool, or [VS Code Extension](#vscode). The command-line tool is modeled after the Linux `perf` command. WindowsPerf includes a **counting model** for counting events such as cycles, instructions, and cache events and a **sampling model** to understand how frequently events occur. @@ -40,6 +40,8 @@ WindowsPerf includes a **counting model** for counting events such as cycles, in WindowsPerf cannot be used on virtual machines, such as cloud instances. {{% /notice %}} +You can interact with the + ## Visual Studio and the Windows Driver Kit (WDK) WindowsPerf relies on `dll` files installed with Visual Studio (Community Edition or higher) and (optionally) installers from the Windows Driver Kit extension. 
@@ -57,20 +59,37 @@ https://gitlab.com/Linaro/WindowsPerf/windowsperf/-/releases To download directly from command prompt, use: ```console -mkdir windowsperf-bin-3.2.1 -cd windowsperf-bin-3.2.1 -curl https://gitlab.com/api/v4/projects/40381146/packages/generic/windowsperf/3.2.1/windowsperf-bin-3.2.1.zip --output windowsperf-bin-3.2.1.zip +mkdir windowsperf-bin-3.8.0 +cd windowsperf-bin-3.8.0 +curl https://gitlab.com/api/v4/projects/40381146/packages/generic/windowsperf/3.8.0/windowsperf-bin-3.8.0.zip --output windowsperf-bin-3.8.0.zip ``` Unzip the package: ```console -tar -xmf windowsperf-bin-3.2.1.zip +tar -xmf windowsperf-bin-3.8.0.zip ``` +## Install VS Code Extension (optional) {#vscode} + +In addition to the command-line tools, `WindowsPerf` is available on the [VS Code Marketplace](https://marketplace.visualstudio.com/items?itemName=Arm.windowsperf). + +Install by opening the `Extensions` view (`Ctrl`+`Shift`+`X`) and searching for `WindowsPerf`. Click `Install`. + +Open `Settings` (`Ctrl`+`,`) > `Extensions` > `WindowsPerf`, and specify the path to the `wperf` executable. + +{{% notice Non-Windows on Arm host%}} +You can only generate reports from a Windows on Arm device. + +If using a non-Windows on Arm host, you can import and analyze `WindowsPerf` JSON reports from such devices. + +You do not need to install `wperf` on non-Windows on Arm devices. +{{% /notice %}} + + ## Install wperf driver -You can install the kernel driver using either the Visual Studio [devcon](#devcon) utility or the supplied [installer](#devgen). +You can install the kernel driver using either the Visual Studio [devcon](#devcon_install) utility or the supplied [installer](#devgen_install). {{% notice Note%}} You must install the driver as `Administrator`. @@ -80,10 +99,10 @@ Open a `Windows Command Prompt` terminal with `Run as administrator` enabled. Navigate to the `windowsperf-bin-` directory. ```command -cd windowsperf-bin-3.2.1 +cd windowsperf-bin-3.8.0 ``` -### Install with devcon {#devcon} +### Install with devcon {#devcon_install} Navigate into the `wperf-driver` folder, and use `devcon` to install the driver: @@ -99,12 +118,8 @@ Updating drivers for Root\WPERFDRIVER from \wperf-driver.inf. Drivers installed successfully. ``` -### Install with wperf-devgen {#devgen} +### Install with wperf-devgen {#devgen_install} -Copy the `wperf-devgen.exe` executable to the `wperf-driver` folder. -```command -copy wperf-devgen.exe wperf-driver\ -``` Navigate to the `wperf-driver` folder and run the installer: ```command cd wperf-driver @@ -134,21 +149,22 @@ wperf --version ``` You should see output similar to: ```output -Component Version GitVer -========= ======= ====== -wperf 3.2.1 c831cfc2 -wperf-driver 3.2.1 c831cfc2 + Component Version GitVer FeatureString + ========= ======= ====== ============= + wperf 3.8.0 6d15ddfc +etw-app + wperf-driver 3.8.0 6d15ddfc +etw-drv + ``` ## Uninstall wperf driver -You can uninstall (aka "remove") the kernel driver using either the Visual Studio [devcon](#devcon) utility or the supplied [installer](#devgen). +You can uninstall (aka "remove") the kernel driver using either the Visual Studio [devcon](#devcon_uninstall) utility or the supplied [installer](#devgen_uninstall). {{% notice Note%}} You must uninstall the driver as `Administrator`. {{% /notice %}} -### Uninstall with devcon {#devcon} +### Uninstall with devcon {#devcon_uninstall} Below command removes the device from the device tree and deletes the device stack for the device. 
As a result of these actions, child devices are removed from the device tree and the drivers that support the device are unloaded. See [DevCon Remove](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/devcon-remove) article for more details. @@ -161,7 +177,7 @@ ROOT\SYSTEM\0001 : Removed 1 device(s) were removed. ``` -### Uninstall with wperf-devgen {#devgen} +### Uninstall with wperf-devgen {#devgen_uninstall} ```command wperf-devgen uninstall diff --git a/content/learning-paths/cross-platform/mca-godbolt/running_mca.md b/content/learning-paths/cross-platform/mca-godbolt/running_mca.md index 15b4ddf6a3..d8155a35c5 100644 --- a/content/learning-paths/cross-platform/mca-godbolt/running_mca.md +++ b/content/learning-paths/cross-platform/mca-godbolt/running_mca.md @@ -1,6 +1,6 @@ --- title: Run MCA with Arm assembly -weight: 2 +weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- @@ -393,4 +393,4 @@ You can see by looking at the timeline view that instructions no longer depend o Instructions also spend less time waiting in the scheduler's queue. This explains why the performance of `sum_test2.s` is so much better than `sum_test1.s`. -In the next section, you can try running `llvm-mca` with Compiler Explorer. \ No newline at end of file +In the next section, you can try running `llvm-mca` with Compiler Explorer. diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/01.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/01.png new file mode 100644 index 0000000000..bc0c2cbffe Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/01.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/02.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/02.png new file mode 100644 index 0000000000..54eacae4e1 Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/02.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/03.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/03.png new file mode 100644 index 0000000000..2dea1eff2c Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/03.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/1.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/1.png new file mode 100644 index 0000000000..9fd24961b7 Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/1.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/2.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/2.png new file mode 100644 index 0000000000..881080cdf0 Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/2.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/3.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/3.png new file mode 100644 index 0000000000..8faba5d3dc 
Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/3.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/4.png b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/4.png new file mode 100644 index 0000000000..14c3ede70e Binary files /dev/null and b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/Figures/4.png differ diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md new file mode 100644 index 0000000000..23ac166cbc --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_index.md @@ -0,0 +1,45 @@ +--- +title: Create and train a PyTorch model for digit classification + +minutes_to_complete: 80 + +who_is_this_for: This is an introductory topic for software developers interested in learning how to use PyTorch to create and train a feedforward neural network for digit classification. + +learning_objectives: + - Prepare a PyTorch development environment. + - Download and prepare the MNIST dataset. + - Create a neural network architecture using PyTorch. + - Train a neural network using PyTorch. + +prerequisites: + - A computer that can run Python3 and Visual Studio Code. The OS can be Windows, Linux, or macOS. + + +author_primary: Dawid Borycki + +### Tags +skilllevels: Introductory +subjects: ML +armips: + - Cortex-A + - Cortex-X + - Neoverse +operatingsystems: + - Windows + - Linux + - macOS +tools_software_languages: + - Android Studio + - Coding +shared_path: true +shared_between: + - servers-and-cloud-computing + - laptops-and-desktops + - smartphones-and-mobile + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +--- diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md new file mode 100644 index 0000000000..82cf1f985b --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_next-steps.md @@ -0,0 +1,42 @@ +--- +# ================================================================================ +# Edit +# ================================================================================ + +next_step_guidance: > + Proceed to Use Keras Core with TensorFlow, PyTorch, and JAX backends to continue exploring Machine Learning. + +# 1-3 sentence recommendation outlining how the reader can generally keep learning about these topics, and a specific explanation of why the next step is being recommended. + +recommended_path: "/learning-paths/servers-and-cloud-computing/keras-core/" + +# Link to the next learning path being recommended(For example this could be /learning-paths/servers-and-cloud-computing/mongodb). + + +# further_reading links to references related to this path. 
Can be: + # Manuals for a tool / software mentioned (type: documentation) + # Blog about related topics (type: blog) + # General online references (type: website) + +further_reading: + - resource: + title: PyTorch + link: https://pytorch.org + type: documentation + - resource: + title: MNIST + link: https://en.wikipedia.org/wiki/MNIST_database + type: website + - resource: + title: Visual Studio Code + link: https://code.visualstudio.com + type: website + + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +weight: 21 # set to always be larger than the content in this path, and one more than 'review' +title: "Next Steps" # Always the same +layout: "learningpathall" # All files under learning paths have this same wrapper +--- diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md new file mode 100644 index 0000000000..fb1980742f --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/_review.md @@ -0,0 +1,50 @@ +--- +# ================================================================================ +# Edit +# ================================================================================ + +# Always 3 questions. Should try to test the reader's knowledge, and reinforce the key points you want them to remember. + # question: A one sentence question + # answers: The correct answers (from 2-4 answer options only). Should be surrounded by quotes. + # correct_answer: An integer indicating what answer is correct (index starts from 0) + # explanation: A short (1-3 sentence) explanation of why the correct answer is correct. Can add additional context if desired + + +review: + - questions: + question: > + Does the input layer of the model flatten the 28x28 pixel image into a 1D array of 784 elements? + answers: + - "Yes" + - "No" + correct_answer: 1 + explanation: > + Yes, the model uses nn.Flatten() to reshape the 28x28 pixel image into a 1D array of 784 elements for processing by the fully connected layers. + - questions: + question: > + Will the model make random predictions if it’s run before training? + answers: + - "Yes" + - "No" + correct_answer: 1 + explanation: > + Yes, however in such the case the model will produce random outputs, as the network has not been trained to recognize any patterns from the data. + - questions: + question: > + Which loss function was used to train the PyTorch model on the MNIST dataset? + answers: + - Mean Squared Error Loss + - CrossEntropyLoss + - Hinge Loss + - Binary Cross-Entropy Loss + correct_answer: 2 + explanation: > + The CrossEntropyLoss function was used to train the model because it is suitable for multi-class classification tasks like digit classification. It measures the difference between the predicted probabilities and the true class labels, helping the model learn to make accurate predictions. 
+ +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +title: "Review" # Always the same title +weight: 20 # Set to always be larger than the content in this path +layout: "learningpathall" # All files under learning paths have this same wrapper +--- diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md new file mode 100644 index 0000000000..d50b6d3c42 --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/datasets-and-training.md @@ -0,0 +1,177 @@ +--- +# User change +title: "Datasets and training" + +weight: 5 + +layout: "learningpathall" +--- + +Start by downloading the MNIST dataset. Proceed as follows: + +1. Open the pytorch-digits.ipynb you created earlier. + +2. Add the following statements: + +```python +from torchvision import transforms, datasets +from torch.utils.data import DataLoader + +# Training data +training_data = datasets.MNIST( + root="data", + train=True, + download=True, + transform=transforms.ToTensor() +) + +# Test data +test_data = datasets.MNIST( + root="data", + train=False, + download=True, + transform=transforms.ToTensor() +) + +# Dataloaders +batch_size = 32 + +train_dataloader = DataLoader(training_data, batch_size=batch_size) +test_dataloader = DataLoader(test_data, batch_size=batch_size) +``` + +The above code snippet downloads the MNIST dataset, transforms the images into tensors, and sets up data loaders for training and testing. Specifically, the `datasets.MNIST` function is used to download the MNIST dataset, with `train=True` indicating training data and `train=False` indicating test data. The `transform=transforms.ToTensor()` argument converts each image in the dataset into a PyTorch tensor, which is necessary for model training and evaluation. + +The DataLoader wraps the datasets and allows efficient loading of data in batches. It handles data shuffling, batching, and parallel loading. Here, the train_dataloader and test_dataloader are created with a batch_size of 32, meaning they will load 32 images per batch during training and testing. + +This setup prepares the training and test datasets for use in a machine learning model, enabling efficient data handling and model training in PyTorch. + +To run the above code, you will need to install certifi package: + +```console +pip install certifi +``` + +The certifi Python package provides the Mozilla root certificates, which are essential for ensuring the SSL connections are secure. If you’re using macOS, you may also need to install the certificates by running: + +```console +/Applications/Python\ 3.x/Install\ Certificates.command +``` + +Make sure to replace `x` with the number of Python version you have installed. + +After running the code you will see the output that might look like shown below: + +![image](Figures/01.png) + +# Train the model + +To train the model, specify the loss function and the optimizer: + +```Python +learning_rate = 1e-3 + +loss_fn = nn.CrossEntropyLoss() +optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) +``` + +Use CrossEntropyLoss as the loss function and the Adam optimizer for training. The learning rate is set to 1e-3. 
+ +Next, define the methods for training and evaluating the feedforward neural network: + +```Python +def train_loop(dataloader, model, loss_fn, optimizer): + size = len(dataloader.dataset) + for batch, (x, y) in enumerate(dataloader): + # Compute prediction and loss + pred = model(x) + loss = loss_fn(pred, y) + + # Backpropagation + optimizer.zero_grad() + loss.backward() + optimizer.step() + +def test_loop(dataloader, model, loss_fn): + size = len(dataloader.dataset) + num_batches = len(dataloader) + test_loss, correct = 0, 0 + + with torch.no_grad(): + for x, y in dataloader: + pred = model(x) + test_loss += loss_fn(pred, y).item() + correct += (pred.argmax(1) == y).type(torch.float).sum().item() + + test_loss /= num_batches + correct /= size + + print(f"Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n") +``` + +The first method, `train_loop`, uses the backpropagation algorithm to optimize the trainable parameters and minimize the prediction error of the neural network. The second method, `test_loop`, calculates the neural network error using the test images and displays the accuracy and loss values. + +You can now invoke these methods to train and evaluate the model using 10 epochs. + +```Python +epochs = 10 + +for t in range(epochs): + print(f"Epoch {t+1}:") + train_loop(train_dataloader, model, loss_fn, optimizer) + test_loop(test_dataloader, model, loss_fn) +``` + +After running this code, you will see the following output that shows the training progress. + +![image](Figures/02.png) + +Once the training is complete, you will see something like the following: + +```output +Epoch 10: +Accuracy: 95.4%, Avg loss: 1.507491 +``` + +which shows the model achieved around 95% of accuracy. + +# Save the model + +Once the model is trained, you can save it. There are various approaches for this. In PyTorch, you can save both the model’s structure and its weights to the same file using the `torch.save()` function. Alternatively, you can save only the weights (parameters) of the model, not the model architecture itself. This requires you to have the model’s architecture defined separately when loading. To save the model weights, you can use the following command: + +```Python +torch.save(model.state_dict(), "model_weights.pth"). +``` + +However, PyTorch does not save the definition of the class itself. When you load the model using `torch.load()`, PyTorch needs to know the class definition to recreate the model object. + +Therefore, when you later want to use the saved model for inference, you will need to provide the definition of the model class. + +Alternatively, you can use TorchScript, which serializes both the architecture and weights into a single file that can be loaded without needing the original class definition. This is particularly useful for deploying models to production or sharing models without code dependencies. + +Use TorchScript to save the model using the following commands: + +```Python +# Set model to evaluation mode +model.eval() + +# Trace the model with an example input +traced_model = torch.jit.trace(model, torch.rand(1, 1, 28, 28)) + +# Save the traced model +traced_model.save("model.pth") +``` + +The above commands set the model to evaluation mode, trace the model, and save it. Tracing is useful for converting models with static computation graphs to TorchScript, making them portable and independent of the original class definition. + +Setting the model to evaluation mode before tracing is important for several reasons: + +1. 
Behavior of Layers like Dropout and BatchNorm: + * Dropout. During training, dropout randomly zeroes out some of the activations to prevent overfitting. During evaluation dropout is turned off, and all activations are used. + * BatchNorm. During training, Batch Normalization layers use batch statistics to normalize the input. During evaluation, they use running averages calculated during training. + +2. Consistent Inference Behavior. By setting the model to eval mode, you ensure that the traced model will behave consistently during inference, as it will not use dropout or batch statistics that are inappropriate for inference. + +3. Correct Tracing. Tracing captures the operations performed by the model using a given input. If the model is in training mode, the traced graph may include operations related to dropout and batch normalization updates. These operations can affect the correctness and performance of the model during inference. + +In the next step, you will use the saved model for inference. diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md new file mode 100644 index 0000000000..c421f037b1 --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/inference.md @@ -0,0 +1,112 @@ +--- +# User change +title: "Inference" + +weight: 6 + +layout: "learningpathall" +--- + +The inference process involves using a trained model to make predictions on new, unseen data. It typically follows these steps: + +1. **Load the Trained Model**: the model, along with its learned parameters - weights and biases - is loaded from a saved file. +2. **Prepare the Input Data**: the input data is pre-processed in the same way as during training, for example, normalization and tensor conversion, to ensure compatibility with the model. +3. **Make Predictions**: the pre-processed data is fed into the model, which computes the output based on its trained parameters. The output is often a probability distribution over possible classes. +4. **Interpret the Results**: the predicted class is usually the one with the highest probability. The results can then be used for further analysis or decision-making. + +This process allows the model to generalize its learned knowledge to make accurate predictions on new data. + +# Runing inference in PyTorch + +You can inference in PyTorch using the previously saved model. To display results, you can use matplotlib. 
+ +Start by installing matplotlib package: + +```console +pip install matplotlib +``` + +Then, in Visual Studio Code create a new file named `pytorch-digits-inference.ipynb` and modify the file to include the code below: + +```python +import torch +from torchvision import datasets, transforms +import matplotlib.pyplot as plt +import random + +# Define a transformation to convert the image to a tensor +transform = transforms.Compose([ + transforms.ToTensor() +]) + +# Load the test set with transformation +test_data = datasets.MNIST( + root="data", + train=False, + download=True, + transform=transform +) + +# Load the entire model +model = torch.jit.load("model.pth") + +# Set the model to evaluation mode +model.eval() + +# Select 16 random indices from the test dataset +random_indices = random.sample(range(len(test_data)), 16) + +# Plot the 16 randomly selected images +fig, axes = plt.subplots(4, 4, figsize=(12, 12)) # Create a 4x4 grid of subplots + +for i, ax in enumerate(axes.flat): + # Get a random image and its label + index = random_indices[i] + image, label = test_data[index] + + # Add a batch dimension (model expects a batch of images) + image_batch = image.unsqueeze(0) + + # Run inference + with torch.no_grad(): + prediction = model(image_batch) + + # Get the predicted class + predicted_label = torch.argmax(prediction, dim=1).item() + + # Display the image with actual and predicted labels + ax.imshow(image.squeeze(), cmap="gray") + ax.set_title(f"Actual: {label}\nPredicted: {predicted_label}") + ax.axis("off") # Remove axes for clarity + +plt.tight_layout() +plt.show() +``` + +The above code performs inference on the saved PyTorch model using 16 randomly-selected images from the MNIST test dataset and displays them along with their actual and predicted labels. + +As before, start by importing the necessary Python libraries: torch, datasets, transforms, matplotlib.pyplot, and random. Torch is used for loading the model and performing tensor operations. Datasets and transforms from torchvision are used for loading and transforming the MNIST dataset. Use matplotlib.pyplot for plotting and displaying images, and random is used for selecting random images from the dataset. + +Next, load the MNIST test dataset using datasets.MNIST() with train=False to specify that it’s the test data. The dataset is automatically downloaded if it’s not available locally. + +Load the saved model using torch.jit.load("model.pth") and set the model to evaluation mode using model.eval(). This ensures that layers like dropout and batch normalization behave appropriately during inference. + +Subsequently, select 16 random images and create a 4x4 grid of subplots using plt.subplots(4, 4, figsize=(12, 12)) for displaying the images. + +Afterwards, perform inference and display the images in a loop. Specifically, for each of the 16 selected images, the image and its label are retrieved from the dataset using the random index. The image tensor is expanded to include a batch dimension (image.unsqueeze(0)) because the model expects a batch of images. Inference is performed with model(image_batch) to get the prediction. The predicted label is determined using torch.argmax() to find the index of the maximum probability in the output. Each image is displayed in its respective subplot with the actual and predicted labels. We use plt.tight_layout() to ensure that the layout is adjusted nicely, and plt.show() to display the 16 images with their actual and predicted labels. 
+ +This code demonstrates how to use a saved PyTorch model for inference and visualization of predictions on a subset of the MNIST test dataset. + +After running the code, you should see results similar to the following figure: + +![image](Figures/03.png) + +# What you have learned + +In this exercise, you went through the complete process of training and using a PyTorch model for digit classification on the MNIST dataset. Using the training dataset, you optimized the model’s weights and biases over multiple epochs. You employed the CrossEntropyLoss function and the Adam optimizer to minimize prediction errors and improve accuracy. You periodically evaluated the model on the test dataset to monitor its performance, ensuring it was learning effectively without overfitting. + +After training, you saved the model using TorchScript, which captures both the model’s architecture and its learned parameters. This made the model portable and independent of the original class definition, simplifying deployment. + +Next, you performed inference. You loaded the saved model and set it to evaluation mode to ensure that layers like dropout and batch normalization behaved correctly during inference. You randomly selected 16 images from the MNIST test dataset to evaluate the model’s performance on unseen data. For each selected image, you used the model to predict the digit, comparing the predicted labels with the actual ones. You displayed the images alongside their actual and predicted labels in a 4x4 grid, visually assessing the model’s accuracy and performance. + +This comprehensive process, from model training and saving to inference and visualization, illustrates the end-to-end workflow for building and deploying a machine learning model in PyTorch. It demonstrates how to train a model, save it in a portable format, and then use it to make predictions on new data. diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md new file mode 100644 index 0000000000..af7cffde58 --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro.md @@ -0,0 +1,124 @@ +--- +# User change +title: "Prepare a PyTorch development environment" + +weight: 2 + +layout: "learningpathall" +--- + +PyTorch is an open-source deep learning framework that is developed by Meta AI and is now part of the Linux Foundation. + +PyTorch is designed to provide a flexible and efficient platform for building and training neural networks. It is widely used due to its dynamic computational graph, which allows users to modify the architecture during runtime, making debugging and experimentation easier. + +PyTorch's objective is to provide a more flexible, user-friendly deep learning framework that addresses the limitations of static computational graphs found in earlier tools like TensorFlow. + +Prior to PyTorch, many frameworks used static computation graphs that require the entire model structure to be defined before training, making experimentation and debugging cumbersome. PyTorch introduced dynamic computational graphs, also known as “define-by-run”, that allow the graph to be constructed dynamically as operations are executed. This flexibility significantly improves ease of use for researchers and developers, enabling faster prototyping, easier debugging, and more intuitive code. 
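As a minimal sketch of this define-by-run behavior, the short example below uses only standard `torch` operations: an ordinary Python `if` statement decides part of the computation at run time, and autograd records and differentiates whichever path actually executed:

```python
import torch

x = torch.randn(3, requires_grad=True)

# The graph is recorded as these operations run, so ordinary Python
# control flow can change its shape from one call to the next.
y = x * 2
if y.sum() > 0:          # a data-dependent decision made while the code runs
    y = y.relu()

loss = y.mean()
loss.backward()          # gradients flow back through the path that actually ran
print(x.grad)
```

Because the branch is plain Python, you can step through it with a debugger or add print statements, which is a large part of why experimentation feels natural.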
+ + +Additionally, PyTorch seamlessly integrates with Python, encouraging a native coding experience. Its deep integration with GPU acceleration also makes it a powerful tool for both research and production environments. This combination of flexibility, usability, and performance has contributed to PyTorch’s rapid adoption, especially in academic research, where experimentation and iteration are crucial. + +A typical process for creating a feedforward neural network in PyTorch involves defining a sequential stack of fully-connected layers, which are also known as *linear layers*. Each layer transforms the input by applying a set of weights and biases, followed by an activation function like ReLU. PyTorch supports this process using the torch.nn module, where layers are easily defined and composed. + +To create a model, users subclass the torch.nn.Module class, defining the network architecture in the __init__ method, and implement the forward pass in the forward method. PyTorch’s intuitive API and support for GPU acceleration make it ideal for building efficient feedforward networks, particularly in tasks such as image classification and digit recognition. + +In this Learning Path, you will explore how to use PyTorch for creating a model for digit recognition, before then proceeding to train it. + +## Before you begin + +Before you begin make sure Python3 is installed on your system. You can check this by running: + +```console +python3 --version +``` + +The expected output is the Python version, for example: + +```output +Python 3.11.2 +``` + +If Python3 is not installed, download and install it from [python.org](https://www.python.org/downloads/). + +Alternatively, you can also install Python3 using package managers such as Brew or APT. + +If you are using Windows on Arm you can refer to the [Python install guide](https://learn.arm.com/install-guides/py-woa/). + +Next, download and install [Visual Studio Code](https://code.visualstudio.com/download). + +## Install PyTorch and additional Python packages + +To prepare a virtual Python environment, install PyTorch, and the additional tools you will need for this Learning Path: + +1. Open a terminal or command prompt and navigate to your project directory. + +2. Create a virtual environment by running: + +```console +python -m venv pytorch-env +``` + +This will create a virtual environment named pytorch-env. + +3. Activate the virtual environment: + +* On Windows: +```console +pytorch-env\Scripts\activate +``` + +* On macOS or Linux: +```console +source pytorch-env/bin/activate +``` + +Once activated, you should see the virtual environment name in your terminal prompt. + +3. Install PyTorch using `pip`: + +```console +pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu +``` + +4. Install torchsummary, Jupyter and IPython Kernel: + +```console +pip install torchsummary +pip install jupyter +pip install ipykernel +``` + +5. Register your virtual environment as a new kernel: + +```console +python3 -m ipykernel install --user --name=pytorch-env +``` + +6. Install the Jupyter Extension in VS Code: + +* Open VS Code and go to the Extensions view (click on the Extensions icon or press Ctrl+Shift+X). + +* Search for “Jupyter” and install the official Jupyter extension. + +* Optionally, also install the Python extension if you haven’t already, as it improves Python language support in VS Code. + +To ensure everything is set up correctly: + +1. Open Visual Studio Code. +2. 
Click New file, and select `Jupyter Notebook .ipynb Support`. +3. Save the file as `pytorch-digits.ipynb`. +4. Select the Python kernel you created earlier (pytorch-env). To do so, click Kernels in the top right corner. Then, click Jupyter Kernel..., and you will see the Python kernel as shown below: + +![img1](Figures/1.png) + +5. In your Jupyter notebook, run the following code to verify PyTorch is working correctly: + +```console +import torch +print(torch.__version__) +``` + +It will look as follows: +![img2](Figures/2.png) + +With your development environment created, you can proceed to creating a PyTorch model. diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md new file mode 100644 index 0000000000..ae6126132d --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/intro2.md @@ -0,0 +1,53 @@ +--- +# User change +title: "PyTorch model training" + +weight: 4 + +layout: "learningpathall" +--- + +In the previous section, you created a feedforward neural network for digit classification using the MNIST dataset. The network was left untrained and lacks the ability to make accurate predictions. + +To enable the network to recognize handwritten digits effectively, training is needed. Training in PyTorch involves configuring the network's parameters, such as weights and biases, by exposing the model to labeled data and iteratively adjusting these parameters to minimize prediction errors. This process allows the model to learn the patterns in the data, enabling it to make accurate classifications on new, unseen inputs. + +The typical approach to training a neural network in PyTorch involves several key steps. + +First, obtain and preprocess the dataset, which usually includes normalizing the data and converting it into a format suitable for the model. + +Next, the dataset is split into training and testing subsets. Training data is used to update the model’s parameters, while testing data evaluates its performance. During training, feed batches of input data through the network, calculate the prediction error or loss using a loss function (such as cross-entropy for classification tasks), and optimize the model’s weights and biases using backpropagation. Backpropagation involves computing the gradient of the loss with respect to each parameter and then updating the parameters using an optimizer, like Stochastic Gradient Descent (SGD) or Adam. This process is repeated for multiple epochs until the model achieves satisfactory performance, balancing accuracy and generalization. + +### Loss, gradients, epoch and backpropagation + +Loss is a measure of how well a model’s predictions match the true labels of the data. It quantifies the difference between the predicted output and the actual output. The lower the loss, the better the model’s performance. In classification tasks, a common loss function is Cross-Entropy Loss, while Mean Squared Error (MSE) is often used for regression tasks. The goal of training is to minimize the loss, which indicates that the model’s predictions are getting closer to the actual labels. + +Gradients represent the rate of change of the loss with respect to each of the model’s parameters (weights and biases). They are used to update the model’s parameters in the direction that reduces the loss. 
Gradients are calculated during the backpropagation step, where the loss is propagated backward through the network to compute how each parameter contributes to the overall loss. Optimizers like SGD or Adam use these gradients to adjust the parameters, effectively “teaching” the model to improve its predictions. + +An epoch refers to one complete pass through the entire training dataset. During each epoch, the model sees every data point once and updates its parameters accordingly. Multiple epochs are typically required to train a model effectively because, during each epoch, the model learns and fine-tunes its parameters based on the data it processes. The number of epochs is a hyperparameter that you set before training, and increasing it can improve the model’s performance, but too many epochs may lead to overfitting, where the model performs well on training data but poorly on new, unseen data. + +Backpropagation is a fundamental algorithm used in training neural networks to optimize their parameters—weights and biases—by minimizing the loss function. It works by propagating the error backward through the network, calculating the gradients of the loss function with respect to each parameter, and updating these parameters accordingly. + +### Training a model in PyTorch + +To train a model in PyTorch, several essential components are required: + +1. **Dataset**: the source of data that the model will learn from. It typically consists of input samples and their corresponding labels. PyTorch provides the `torchvision.datasets` module for easy access to popular datasets like MNIST, CIFAR-10, and ImageNet. You can also create custom datasets using the `torch.utils.data.Dataset` class. + +2. **DataLoader**: used to efficiently load and batch the data during training. It handles data shuffling, batching, and parallel loading, making it easier to feed the data into the model in a structured manner. This is crucial for performance, especially when working with large datasets. + +3. **Model**: the Neural Network Architecture defines the structure of the neural network. You learned that in PyTorch, models are typically created by subclassing `torch.nn.Module` and defining the network layers and forward pass. This includes specifying the input and output dimensions and the sequence of layers, such as linear layers, activation functions, and dropout. + +4. **Loss Function**: measures how far the model’s predictions are from the actual targets. It guides the optimization process by providing a signal that tells the model how to adjust its parameters. Common loss functions include Cross-Entropy Loss for classification tasks and Mean Squared Error (MSE) Loss for regression tasks. You can select a predefined loss function from torch.nn or define your own. + +5. **Optimizer**: updates the model’s parameters based on the gradients computed during backpropagation. It determines how the model learns from the data. Popular optimizers include Stochastic Gradient Descent (SGD) and Adam, which are available in the torch.optim module. You need to specify the learning rate (a hyperparameter that controls how much to change the parameters in response to the gradient) and other hyperparameters when creating the optimizer. + +6. **Training Loop**: where the actual learning happens. For each iteration of the loop: + * A batch of data is fetched from the DataLoader. + * The model performs a forward pass to generate predictions. + * The loss is calculated using the predictions and the true labels. 
+ * The gradients are computed via backpropagation. + * The optimizer updates the model’s parameters based on the gradients. + +This process is repeated for a specified number of epochs to gradually reduce the loss and improve the model’s performance. + +In the next step you will see how to perform model training. diff --git a/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md new file mode 100644 index 0000000000..abfc9f117f --- /dev/null +++ b/content/learning-paths/cross-platform/pytorch-digit-classification-arch-training/model.md @@ -0,0 +1,142 @@ +--- +# User change +title: "Create a PyTorch model for MNIST" + +weight: 3 + +layout: "learningpathall" +--- + +You can create and train a feedforward neural network to classify handwritten digits from the MNIST dataset. This dataset contains 70,000 images, comprising 60,000 training and 10,000 testing images, of handwritten numerals (0-9), each with dimensions of 28x28 pixels. Some representative MNIST digits with their corresponding labels are shown below. + +![img3](Figures/3.png) + +The neural network begins with an input layer containing 28x28 = 784 input nodes, with each node accepting a single pixel from an MNIST image. + +You will add a linear hidden layer with 96 nodes, using the hyperbolic tangent (tanh) activation function. To prevent overfitting, a dropout layer is applied, randomly setting 20% of the nodes to zero. + +You will then include another hidden layer with 256 nodes, followed by a second dropout layer that again removes 20% of the nodes. Finally, the output layer consists of ten nodes, each representing the probability of recognizing one of the digits (0-9). + +The total number of trainable parameters for this network is calculated as follows: + +* First hidden layer: 784 x 96 + 96 = 75,360 parameters (weights + biases). +* Second hidden layer: 96 x 256 + 256 = 24,832 parameters. +* Output layer: 256 x 10 + 10 = 2,570 parameters. + +In total, the network will have 102,762 trainable parameters. + +# Implementation + +To implement the model, supplement the `pytorch-digits.ipynb` notebook with the following statements: + +```Python +from torch import nn +from torchsummary import summary + +class_names = range(10) + +class NeuralNetwork(nn.Module): + def __init__(self): + super(NeuralNetwork, self).__init__() + self.flatten = nn.Flatten() + self.linear_stack = nn.Sequential( + nn.Linear(28*28, 96), + nn.Tanh(), + nn.Dropout(.2), + + nn.Linear(96, 256), + nn.Sigmoid(), + nn.Dropout(.2), + + nn.Linear(256, len(class_names)), + nn.Softmax(dim=1) + ) + + def forward(self, x): + x = self.flatten(x) + logits = self.linear_stack(x) + return logits +``` + +To build the neural network in PyTorch, define a class that inherits from PyTorch’s nn.Module. This approach is similar to TensorFlow’s subclassing API. In this case, define a class named NeuralNetwork, which consists of two main components: + +1. **__init__** method + +This method serves as the constructor for the class. + +First initialize the nn.Module with super(NeuralNetwork, self).__init__(). Inside this method, define the architecture of the feedforward neural network. The input is first flattened from its original 28x28 pixel format into a 1D array of 784 elements using nn.Flatten(). + +Next, create a sequential stack of layers using nn.Sequential. 
+ +The network consists of: +* A fully-connected (Linear) layer with 96 nodes, followed by the Tanh activation function. +* A Dropout layer with a 20% dropout rate to prevent overfitting. +* A second Linear layer, with 256 nodes, followed by the Sigmoid activation function. +* Another Dropout layer, that removes 20% of the nodes. +* A final Linear layer, with 10 nodes (matching the number of classes in the dataset), followed by a Softmax activation function that outputs class probabilities. + +2. **forward** method + +This method defines the forward pass of the network. It takes an input tensor x, flattens it using self.flatten, and then passes it through the defined sequential stack of layers (self.linear_stack). + +The output, called logits, represents the class probabilities for the digit prediction. + +The next step initializes the model and displays the summary using the torchsummary package: + +```Python +model = NeuralNetwork() + +summary(model, (1, 28, 28)) +``` + +After running the notebook, you will see the following output: + +![img4](Figures/4.png) + +You will see a detailed summary of the NeuralNetwork model’s architecture, including the following information: + +1. Layer Details + +The summary lists each layer of the network sequentially, including: + +* The Flatten layer, which reshapes the 28x28 input images into a 784-element vector. +* The Linear layers with 96 and 256 nodes, respectively, along with the activation functions (Tanh and Sigmoid) applied after each linear transformation. +* The Dropout layers that randomly-deactivate 20% of the neurons in the respective layers. +* The final Linear layer with 10 nodes, corresponding to the output probabilities for the 10 digit classes, followed by the Softmax function. + +2. Input and Output Shapes + +For each layer, the summary shows the shape of the input and output tensors, helping to trace how the data flows through the network. For example, the input shape starts as (1, 28, 28) for the image, which gets flattened to (1, 784) after the Flatten layer. + +3. The summary + +The summary provides the total number of trainable parameters in each layer, including both weights and biases. + +This includes: + +* 75,360 parameters for the first Linear layer (784 inputs × 96 nodes + 96 biases). +* 24,832 parameters for the second Linear layer (96 nodes × 256 nodes + 256 biases). +* 2,570 parameters for the output Linear layer (256 nodes × 10 output nodes + 10 biases). +* At the end, you will see the total number of parameters in the model, which is 102,762 trainable parameters. + +This summary provides a clear overview of the model architecture, the dimensional transformations happening at each layer, and the number of parameters that will be optimized during training. + +Running the model now will produce random outputs, as the network has not been trained to recognize any patterns from the data. The next step is to train the model using a dataset and an optimization process, such as gradient descent, so that it can learn to make accurate predictions. + +At this point, the model makes predictions, but since it hasn’t been trained, the predictions are random and unreliable. The network’s weights are initialized randomly, or use the default initialization methods, so the output probabilities from the softmax layer are essentially random. 
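+
+If you want to see this for yourself, you can pass a random tensor through the untrained model. The cell below is a minimal sketch: it assumes the `model` instance created above is available in your notebook, and it uses random values in place of a real MNIST image.
+
+```Python
+import torch
+
+# A dummy batch containing one random 28x28 "image"
+dummy_input = torch.rand(1, 1, 28, 28)
+
+# Run a forward pass without tracking gradients (also disables the Dropout layers)
+model.eval()
+with torch.no_grad():
+    probabilities = model(dummy_input)
+
+print(probabilities)            # ten values that sum to 1, produced by the Softmax layer
+print(probabilities.argmax(1))  # the class the untrained model happens to prefer
+```
+
+Because the weights are random, repeating this with a different input will often change the predicted class.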
+ +The output is still a probability distribution over the 10 digit classes (0-9), but the values do not correspond to the images, because the model has not learned the patterns from the MNIST dataset. + +Technically, the code will run without errors as long as you provide it with an input image of the correct dimensions, which is 28x28 pixels. The model can accept input, pass it through the layers, and return a prediction - a vector of 10 probabilities. However, the results are not useful until the model is trained. + +# What you have learned so far + +You have successfully defined and initialized a feedforward neural network using PyTorch. + +The model was designed to classify handwritten digits from the MNIST dataset, and details of the architecture were printed using the **summary()** function. + +The network consists of input flattening, two hidden layers with activation functions and dropout for regularization, and an output layer with a softmax function to predict the digit class probabilities. + +You also confirmed that the model has a total of 102,762 trainable parameters. + +The next step is to train the model using the MNIST dataset, which involves feeding the data through the network, calculating the loss, and optimizing the weights based on backpropagation to improve the model's accuracy in digit classification. diff --git a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md index 1a188ccca1..658ab2ad93 100644 --- a/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md +++ b/content/learning-paths/servers-and-cloud-computing/PMUv3_plugin_learning_path/before-you-begin.md @@ -83,7 +83,7 @@ popd Get the PMUv3 plugin source code by running: ```console -git clone https://github.com/GayathriNarayana19/PMUv3_plugin.git +git clone https://github.com/ARM-software/PMUv3_plugin.git ``` Copy the Perf libs: diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/azure-cobalt-vm.png b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/azure-cobalt-vm.png new file mode 100644 index 0000000000..ac4899c156 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/azure-cobalt-vm.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/github-run.png b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/github-run.png new file mode 100644 index 0000000000..50380430ba Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/github-run.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/kubernetes-deployment.png b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/kubernetes-deployment.png new file mode 100644 index 0000000000..b8102b87c9 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_images/kubernetes-deployment.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_index.md b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_index.md new file mode 100644 index 0000000000..c8bae9512d --- /dev/null +++ 
b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_index.md @@ -0,0 +1,41 @@ +--- +title: Deploy a .NET application on Microsoft Azure Cobalt 100 VMs + +minutes_to_complete: 60 + +who_is_this_for: This is an advanced topic for software developers who want to develop cloud-native applications using GitHub Actions and Azure Kubernetes Service (AKS), and run them on Microsoft Azure Cobalt 100 VMs. + +learning_objectives: + - Configure an Azure Cobalt 100 VM as a self-hosted GitHub runner. + - Create an AKS cluster with Arm-based Azure Cobalt 100 nodes using Terraform. + - Deploy a .NET application to AKS with GitHub Actions using the self-hosted Arm64-based runner. + +prerequisites: + - A Microsoft Azure account. + - A GitHub account. + - A machine with [Terraform](/install-guides/terraform/),[Azure CLI](/install-guides/azure-cli), and [Kubectl](/install-guides/kubectl/) installed. + +author_primary: Pranay Bakre + +### Tags +skilllevels: Advanced +subjects: Containers and Virtualization +cloud_service_providers: Microsoft Azure + +armips: + - Neoverse + +tools_software_languages: + - .NET + - Kubernetes + - Docker + +operatingsystems: + - Linux + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_next-steps.md new file mode 100644 index 0000000000..0bf30b80d4 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_next-steps.md @@ -0,0 +1,33 @@ +--- +next_step_guidance: > + .NET based applications are supported on both Linux and Windows on Arm. Continue reading the Learning Paths to discover more. 
+ + +recommended_path: "/learning-paths/servers-and-cloud-computing/aks/" + +further_reading: + - resource: + title: Developing Cloud-native Applications with New Arm Neoverse CSS-based Microsoft Azure Cobalt 100 Virtual Machines + link: https://newsroom.arm.com/blog/microsoft-azure-cobalt-100-vm + type: blog + - resource: + title: AKS documentation + link: https://docs.microsoft.com/en-us/azure/aks/ + type: documentation + - resource: + title: Azure Developer documentation + link: https://docs.microsoft.com/en-us/azure/developer/ + type: documentation + - resource: + title: Kubernetes documentation + link: https://kubernetes.io/docs/home/ + type: documentation + + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +weight: 21 # set to always be larger than the content in this path, and one more than 'review' +title: "Next Steps" # Always the same +layout: "learningpathall" # All files under learning paths have this same wrapper +--- diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_review.md b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_review.md new file mode 100644 index 0000000000..9412a8cad9 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/_review.md @@ -0,0 +1,38 @@ +--- +review: + - questions: + question: > + .NET based-applications can be deployed on Azure Cobalt 100 Arm-based VMs. + answers: + - "True" + - "False" + correct_answer: 1 + explanation: > + Arm Neoverse-based Azure Cobalt 100 VMs support .NET based applications. + - questions: + question: > + What is the general-purpose VM series supported by Azure Cobalt 100 processors? + answers: + - "Dpsv6" + - "Epsv6" + correct_answer: 1 + explanation: > + General-purpose VM series Dpsv6 and Dplsv6 are based on Azure Cobalt 100 processors. Epsv6 series VMs are memory-optimized VMs based on Azure Cobalt 100 processors. + - questions: + question: > + GitHub Actions does not support Arm-based runners. + answers: + - "True" + - "False" + correct_answer: 2 + explanation: > + GitHub provides both self-hosted and managed Arm-based runners for developing cloud- native applications natively on Arm. + + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +title: "Review" # Always the same title +weight: 20 # Set to always be larger than the content in this path +layout: "learningpathall" # All files under learning paths have this same wrapper +--- diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/azure-cobalt.md b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/azure-cobalt.md new file mode 100644 index 0000000000..9ebf82fcfb --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/azure-cobalt.md @@ -0,0 +1,262 @@ +--- +title: "Build and deploy a .NET application" + +weight: 3 + +layout: "learningpathall" +--- + +In this Learning Path, you will build a .NET 8-based web application using a self-hosted GitHub Actions Arm64 runner. You will deploy the application in an Azure Kubernetes Cluster, running on Microsoft Cobalt 100-based VMs. 
Self-hosted runners offer increased control and flexibility in terms of infrastructure, operating systems, and tools, in comparison to GitHub-hosted runners. + +{{% notice Note %}} +* GitHub-hosted Arm64 runners have now reached General Availability. If your GitHub account is part of a Team or an Enterprise Cloud plan, you can use GitHub-hosted Arm64 runners. + +* To learn how you can configure a GitHub-managed runner, see the Learning Path [*Build multi-architecture container images with GitHub Arm-hosted runners*](/learning-paths/cross-platform/github-arm-runners/). +{{% /notice %}} + +## How do I create an Azure Virtual Machine? +Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other VM in Azure. To create an Azure virtual machine, launch the [Azure portal](https://portal.azure.com/) and navigate to Virtual Machines. + +Select `Create Azure Virtual Machine`, and fill in the details such as `Name`, and `Region`. + +In the `Size` field, click on `See all sizes` and select the `D-Series v6` family of VMs. Select `D2psv6` from the list and create the VM. + +![azure-cobalt-vm #center](_images/azure-cobalt-vm.png) + +{{% notice Note %}} +To learn more about Arm-based VMs in Azure, refer to "Getting Started with Microsoft Azure" in [*Get started with Arm-based cloud instances*](/learning-paths/servers-and-cloud-computing/csp/azure). +{{% /notice %}} + +## How do I configure the GitHub repository? + +The source code for the application and configuration files that you require to follow this Learning Path are hosted in this [msbuild-azure github repository](https://github.com/pbk8s/msbuild-azure). This repository also contains the Dockerfile and Kubernetes deployment manifests that you require to deploy the .NET 8 based application. + +Follow these steps: + +* Start by forking the repository. + +* Once the GitHub repository is forked, navigate to the `Settings` tab, and click on `Actions` in the left navigation pane. + + * In `Runners`, select `New self-hosted runner`, which opens up a new page to configure the runner. + +* For `Runner image`, select `Linux`, and for `Architecture`, select `ARM64`. + +* Using the commands shown, execute them on the `D2psv6` VM you created in the previous step. + +* Once you have configured the runner successfully, you will see a self-hosted runner appear on the same page in GitHub. + +{{% notice Note %}} +To learn more about creating an Arm-based self-hosted runner, see this Learning Path [*Use Self-Hosted Arm64-based runners in GitHub Actions for CI/CD*](/learning-paths/laptops-and-desktops/self_hosted_cicd_github/). +{{% /notice %}} + +## How do I create an AKS cluster with Arm-based Azure Cobalt 100 nodes using Terraform? + +You can create an Arm-based AKS cluster by following the steps in this Learning Path [*Create an Arm-based Kubernetes cluster on Microsoft Azure Kubernetes Service*](/learning-paths/servers-and-cloud-computing/aks/cluster_deployment/). + +Make sure to update the `main.tf` file with the correct VM as shown below: + +```console +`vm_size` = `Standard_D2ps_v6` +``` +Once you have successfully created the cluster, you can proceed to the next section. + +## How do I create a container registry with Azure Container Registry (ACR)? + +To create a container registry in Azure Container Registry to host the Docker images for your application, use the following command: + +```console +az acr create --resource-group myResourceGroup --name mycontainerregistry +``` +## How do I set up GitHub Secrets? 
+ +The next step allows GitHub Actions to access the Azure Container Registry to push application docker images and Azure Kubernetes Service to deploy application pods. + +Create the following secrets in your GitHub repository: + +- Populate `ACR_Name` with the name of your Azure Container Registry. +- Populate `AZURE_CREDENTIALS` with Azure Credentials of a Service Principal. +- Populate `CLUSTER_NAME` with the name of your AKS cluster. +- Populate `CLUSTER_RESOURCE_GROUP_NAME` with the name of your resource group. + +Refer to this [guide](https://learn.microsoft.com/en-us/azure/developer/github/connect-from-azure-secret) for further information about signing into Azure using GitHub Actions. + +## Deploy a .NET-based application + +.NET added support for Arm64 applications starting with version 6. Several performance enhancements have been made in later versions. The latest version that supports Arm64 targets is .NET 9. In this Learning Path, you will use the .NET 8 SDK for application development. + +Follow these steps: + +* In your fork of the GitHub repository, inspect the `aks-ga-demo.csproj` file. + +* Verify that the `TargetFramework` field has `net8.0` as the value. + +The contents of the file are shown below: + +```console + + + + net8.0 + aks_ga_demo + + + +``` + +You can inspect the contents of the `Dockerfile` within your repository as well. This is a multi-stage Dockerfile with the following stages: + +1. `base` stage - prepares the base environment with the `.NET 8 SDK` and exposes ports 80 and 443. + +2. `build` stage - restores dependencies and builds the application. + +3. `publish` stage - publishes the application making it ready for deployment. + +4. `final` stage - copies the published application into the final image and sets the entry point to run the application. + +```console +FROM mcr.microsoft.com/dotnet/sdk:8.0 AS base +WORKDIR /app +EXPOSE 80 +EXPOSE 443 + +FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build +WORKDIR /src +COPY ["aks-ga-demo.csproj", "./"] +RUN dotnet restore "./aks-ga-demo.csproj" +COPY . . +WORKDIR "/src/." +RUN dotnet build "aks-ga-demo.csproj" -c Release -o /app/build + +FROM build AS publish +RUN dotnet publish "aks-ga-demo.csproj" -c Release -o /app/publish + +FROM base AS final +WORKDIR /app +COPY --from=publish /app/publish . +ENTRYPOINT ["dotnet", "aks-ga-demo.dll"] +``` + +Next, navigate to the `k8s` folder and check the Kubernetes yaml files. The `deployment.yml` file defines a deployment for the application. It specifies the container image to use from ACR and exposes port 80 for the application. The deployment ensures that the application runs with the defined resource constraints and is accessible on the specified port. + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: githubactions-aks-demo +spec: + selector: + matchLabels: + app: githubactions-aks-demo + template: + metadata: + labels: + app: githubactions-aks-demo + spec: + containers: + - name: githubactions-aks-demo + image: msbuilddemo.azurecr.io/githubactions-aks-demo + resources: + limits: + memory: "128Mi" + cpu: "500m" + ports: + - containerPort: 80 +``` + +The `service.yml` file defines a `Service` and uses `LoadBalancer` to expose the service externally on port 8080, directing traffic to the application’s container on port 80. 
+ +```yaml +apiVersion: v1 +kind: Service +metadata: + name: githubactions-aks-demo-service +spec: + selector: + app: githubactions-aks-demo + type: LoadBalancer + ports: + - port: 8080 + targetPort: 80 +``` + +Finally, have a look at the GitHub Actions file located at `.github/workflows/deploytoAKS.yml` + +```yaml +name: Deploy .NET app + +on: + workflow_dispatch: + push: + + +jobs: + deploy: + name: Deploy application + runs-on: self-hosted + + steps: + - name: Checkout repo + uses: actions/checkout@v2 + + - name: Build image + run: docker build -t githubactions-aks-demo:'${{github.sha}}' . + + - name: Azure login + uses: azure/login@v1.4.6 + with: + creds: '${{ secrets.AZURE_CREDENTIALS }}' + + - name: ACR login + run: az acr login --name msbuilddemo + + - name: Tag and push image + run: | + docker tag githubactions-aks-demo:'${{github.sha}}' msbuilddemo.azurecr.io/githubactions-aks-demo:'${{github.sha}}' + docker push msbuilddemo.azurecr.io/githubactions-aks-demo:'${{github.sha}}' + + - name: Get AKS credentials + env: + CLUSTER_RESOURCE_GROUP_NAME: ${{ secrets.CLUSTER_RESOURCE_GROUP_NAME }} + CLUSTER_NAME: ${{ secrets.CLUSTER_NAME }} + run: | + az aks get-credentials \ + --resource-group $CLUSTER_RESOURCE_GROUP_NAME \ + --name $CLUSTER_NAME \ + --overwrite-existing + + - name: Deploy application + uses: Azure/k8s-deploy@v1 + with: + action: deploy + manifests: | + k8s/deployment.yml + k8s/service.yml + images: | + msbuilddemo.azurecr.io/githubactions-aks-demo:${{github.sha }} +``` + +This GitHub Actions yaml file defines a workflow to deploy a .NET application to Azure Kubernetes Service (AKS). This workflow runs on the self-hosted GitHub Actions runner that you configured in a previous step. This workflow can be triggered manually, or on a push to the repository. + +It has the following main steps: + +1. `Checkout repo` - checks out the repository code. +2. `Build image` - builds a Docker image of the application. +3. `Azure login` - logs in to Azure using stored credentials in GitHub Secrets. +4. `ACR login` - logs in to Azure Container Registry (ACR). +5. `Tag and push image` - tags and pushes the Docker image to Azure Container Registry. +6. `Get AKS credentials` - retrieves Azure Kubernetes Cluster credentials. +7. `Deploy application` - deploys the application to AKS using specified Kubernetes manifests. + +## How do I run the CI/CD pipeline? + +The next step is to trigger the pipeline manually by navigating to `Actions` tab in the GitHub repository. Select `Deploy .NET app`, and click on `Run Workflow`. You can also execute the pipeline by making a commit to the repository. Once the pipeline executes successfully, you will see the Actions output in a format similar to what is shown below: + +![github-run #center](_images/github-run.png) + +You can check your kubernetes cluster and see new application pods deployed on the cluster as shown below: + +![kubernetes-deployment #center](_images/kubernetes-deployment.png) + + + + diff --git a/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/background.md b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/background.md new file mode 100644 index 0000000000..1231a1dd47 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/azure-cobalt-cicd-aks/background.md @@ -0,0 +1,21 @@ +--- +title: "Background" + +weight: 2 + +layout: "learningpathall" +--- + +## What is the Azure Cobalt 100 Arm-based processor? 
+ +Cobalt 100 is Microsoft’s first Arm-based server processor, built using the Armv9 Neoverse-N2 CPU. + +The Cobalt 100 processor is optimized for the performance of scale-out cloud-based applications. + +The Azure Cobalt 100 VM instances include two series: + +* The general-purpose `Dpsv6 and Dplsv6` virtual machine series. +* The memory-optimized `Epsv6` virtual machine series. + +To learn more about Azure Cobalt 100, refer to the blog ["Announcing the preview of new Azure VMs based on the Azure Cobalt 100 processor"](https://techcommunity.microsoft.com/t5/azure-compute-blog/announcing-the-preview-of-new-azure-vms-based-on-the-azure/ba-p/4146353). + diff --git a/content/learning-paths/servers-and-cloud-computing/csp/azure.md b/content/learning-paths/servers-and-cloud-computing/csp/azure.md index bd8a61779f..44d1e8fedf 100644 --- a/content/learning-paths/servers-and-cloud-computing/csp/azure.md +++ b/content/learning-paths/servers-and-cloud-computing/csp/azure.md @@ -11,7 +11,7 @@ layout: "learningpathall" As with most cloud service providers, Azure offers a pay-as-you-use [pricing policy](https://azure.microsoft.com/en-us/pricing/), including a number of [free](https://azure.microsoft.com/en-us/free/) services. -This guide is to help you get started with [Virtual Machines](https://azure.microsoft.com/en-us/products/virtual-machines/), using Arm-based [Ampere](https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/) processors. This is a general-purpose compute platform, essentially your own personal computer in the cloud. +This guide is to help you get started with [Virtual Machines](https://azure.microsoft.com/en-us/products/virtual-machines/), using Arm-based VMs available in Azure. Microsoft Azure offers two generations of Arm-based VMs. The latest generation is based on [Azure Cobalt 100 processors](https://techcommunity.microsoft.com/t5/azure-compute-blog/announcing-the-preview-of-new-azure-vms-based-on-the-azure/ba-p/4146353). The previous generation VMs are based on [Ampere](https://azure.microsoft.com/en-us/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/) processors. This is a general-purpose compute platform, essentially your own personal computer in the cloud. Full [documentation and quickstart guides](https://learn.microsoft.com/en-us/azure/virtual-machines/) are available. diff --git a/content/learning-paths/servers-and-cloud-computing/migration/java.md b/content/learning-paths/servers-and-cloud-computing/migration/java.md index 68b3362cfb..56df9ac46d 100644 --- a/content/learning-paths/servers-and-cloud-computing/migration/java.md +++ b/content/learning-paths/servers-and-cloud-computing/migration/java.md @@ -119,7 +119,9 @@ The default can be changed on the command line with either `-XX:ThreadStackSize= Usually, there's no need to change the default stack size, because the thread stack will be committed as it grows. -There is one situation to be aware of. If Transparent Huge Pages (THP) are set to always, the page size matches the default stack size. In this case the full stack size is commit to memory. If you have a very high number of threads the memory usage will be large. +## Transparent Huge Pages + +If Transparent Huge Pages (THP) are set to always, the page size matches the default stack size. In this case, the full stack size is committed to memory. If you have a very high number of threads the memory usage will be large. 
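+
+Before changing anything, you can check how THP is currently configured. On most Linux systems, the active mode is shown in square brackets when you read the sysfs setting:
+
+```console
+cat /sys/kernel/mm/transparent_hugepage/enabled
+```
+
+The output is similar to:
+
+```output
+[always] madvise never
+```
+
+If `always` is the active mode, the behavior described above applies.
+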
To mitigate this issue, you can either manually change the stack size using the flags or change the THP setting to madvise. diff --git a/content/migration/_index.md b/content/migration/_index.md index c15d8bdef3..d84f05785c 100644 --- a/content/migration/_index.md +++ b/content/migration/_index.md @@ -118,7 +118,8 @@ Which tools are available for building and running containers on Arm servers? | Docker | [Learn how to use Docker](https://learn.arm.com/learning-paths/cross-platform/docker/) | [How to build cloud-native applications for multi-architecture infrastructure](https://stackoverflow.blog/2024/02/05/how-to-build-cloud-native-applications-for-multi-architecture-infrastructure/) | AWS CodeBuild | [Build and share Docker images using AWS CodeBuild](https://learn.arm.com/learning-paths/servers-and-cloud-computing/codebuild/) | | | Docker Build Cloud | [Build multi-architecture container images with Docker Build Cloud](https://learn.arm.com/learning-paths/cross-platform/docker-build-cloud/) | [Supercharge your Arm builds with Docker Build Cloud: Efficiency meets performance](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/supercharge-arm-builds-with-docker-build-cloud) | -| GitHub Actions | [Build multi-architecture container images with GitHub Arm-hosted runners](https://learn.arm.com/learning-paths/cross-platform/github-arm-runners/) | | +| GitHub Actions (GitHub runners) | [Build multi-architecture container images with GitHub Arm-hosted runners](https://learn.arm.com/learning-paths/cross-platform/github-arm-runners/) | [Arm64 on GitHub Actions: Powering faster, more efficient build systems](https://github.blog/news-insights/product-news/arm64-on-github-actions-powering-faster-more-efficient-build-systems/) | +| GitHub Actions (AWS Graviton runners) | [Managed, self-hosted Arm runners for GitHub Actions](https://learn.arm.com/learning-paths/servers-and-cloud-computing/github-actions-runner/) | | {{< /tab >}} @@ -132,10 +133,10 @@ Which programming languages work on Arm servers? - Nearly all of them. 
| Rust | [Rust Install Guide](https://learn.arm.com/install-guides/rust/) | [Neon Intrinsics in Rust](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/rust-neon-intrinsics) | | Java | [Java Install Guide](https://learn.arm.com/install-guides/java/) | [Improving Java performance on Neoverse N1 systems](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/java-performance-on-neoverse-n1) | | | [Migrating Java applications](https://learn.arm.com/learning-paths/servers-and-cloud-computing/migration/java/) | [Java Vector API on AArch64](https://community.arm.com/arm-community-blogs/b/high-performance-computing-blog/posts/java-vector-api-on-aarch64) | -| | | AWS: [Java on Graviton](https://github.com/aws/aws-graviton-getting-started/blob/main/java.md) | -| | | Azure: [Optimizing Java Workloads on Azure General Purpose D-series v5 VMs with Microsoft’s Build of OpenJDK](https://techcommunity.microsoft.com/t5/azure-compute-blog/optimizing-java-workloads-on-azure-general-purpose-d-series-v5/ba-p/3827610) | -| | | Ampere: [Improving Java performance on OCI Ampere A1 compute instances](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/performance-of-specjbb2015-on-oci-ampere-a1-compute-instances) | -| Go | [Go Install Guide](https://learn.arm.com/install-guides/go/) | AWS: [Making your Go workloads up to 20% faster with Go 1.18 and AWS Graviton](https://aws.amazon.com/blogs/compute/making-your-go-workloads-up-to-20-faster-with-go-1-18-and-aws-graviton/)| +| | | [Java on Graviton](https://github.com/aws/aws-graviton-getting-started/blob/main/java.md) | +| | | [Optimizing Java Workloads on Azure General Purpose D-series v5 VMs with Microsoft’s Build of OpenJDK](https://techcommunity.microsoft.com/t5/azure-compute-blog/optimizing-java-workloads-on-azure-general-purpose-d-series-v5/ba-p/3827610) | +| | | [Improving Java performance on OCI Ampere A1 compute instances](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/performance-of-specjbb2015-on-oci-ampere-a1-compute-instances) | +| Go | [Go Install Guide](https://learn.arm.com/install-guides/go/) | [Making your Go workloads up to 20% faster with Go 1.18 and AWS Graviton](https://aws.amazon.com/blogs/compute/making-your-go-workloads-up-to-20-faster-with-go-1-18-and-aws-graviton/)| | .NET | [.NET Install Guide](https://learn.arm.com/install-guides/dotnet/) | [Arm64 Performance Improvements in .NET 7](https://devblogs.microsoft.com/dotnet/arm64-performance-improvements-in-dotnet-7/) | | Python | | [Python on Arm](https://community.arm.com/arm-community-blogs/b/tools-software-ides-blog/posts/python-on-arm)| | PHP | | [Improving performance of PHP for Arm64 and impact on AWS Graviton2 based EC2 instances](https://aws.amazon.com/blogs/compute/improving-performance-of-php-for-arm64-and-impact-on-amazon-ec2-m6g-instances/) | @@ -147,20 +148,20 @@ Which key libraries are optimized for Arm servers? 
| Library/Framework | Learn More | Blogs | |-------------------|------------|-------| -| x264/x265 | [Run x265 (H.265 codec) on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/codec/) | AWS: [Improve video encoding price/performance by up to 36% with Arm Neoverse based Amazon EC2 C6g instances](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/thirty-six-percent-better-video-encoding-with-aws-graviton2_2d00_based-c6g) | -| | | AWS: [Reduce H.265 High-Res Encoding Costs by over 80% with AWS Graviton2](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/reduce-h-265-high-res-encoding-costs-by-over-80-with-aws-graviton2-1207706725) | -| | | Ampere: [Ampere Altra Max Delivers Sustainable High-Resolution H.265 Encoding](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/ampere-altra-max-delivers-sustainable-high-resolution-h-265-video-encoding-without-compromise) | -| | | Ampere: [OCI Ampere A1 Compute instances can significantly reduce video encoding costs versus modern CPUs](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/oracle-cloud-infrastructure-arm-based-a1) | +| x264/x265 | [Run x265 (H.265 codec) on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/codec/) | [Improve video encoding price/performance by up to 36% with Arm Neoverse based Amazon EC2 C6g instances](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/thirty-six-percent-better-video-encoding-with-aws-graviton2_2d00_based-c6g) | +| | | [Reduce H.265 High-Res Encoding Costs by over 80% with AWS Graviton2](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/reduce-h-265-high-res-encoding-costs-by-over-80-with-aws-graviton2-1207706725) | +| | | [Ampere Altra Max Delivers Sustainable High-Resolution H.265 Encoding](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/ampere-altra-max-delivers-sustainable-high-resolution-h-265-video-encoding-without-compromise) | +| | | [OCI Ampere A1 Compute instances can significantly reduce video encoding costs versus modern CPUs](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/oracle-cloud-infrastructure-arm-based-a1) | | ArmPL | [Arm Performance Libraries install guide](https://learn.arm.com/install-guides/armpl/) | [Arm Compiler for Linux and Arm Performance Libraries 24.04](https://community.arm.com/arm-community-blogs/b/high-performance-computing-blog/posts/arm-compiler-for-linux-and-arm-performance-libraries-24-04) | | ArmRAL | [Get started with the Arm 5G RAN Acceleration Library (ArmRAL)](https://learn.arm.com/learning-paths/servers-and-cloud-computing/ran/) | [The next chapter for Arm RAN Acceleration Library: Open-sourcing the code base & accelerating adoption](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/arm-ral-is-now-open-source) | | OpenSSL | | | -| VP9 | | AWS: [Arm-based cloud instances outperform x86 instances by up to 64% on VP9 encoding](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/arm-outperforms-x86-by-up-to-64-percent-on-vp9) | +| VP9 | [Run the AV1 and VP9 codecs on Arm Linux](https://learn.arm.com/learning-paths/servers-and-cloud-computing/codec1/) | [Arm-based cloud instances outperform x86 instances by up to 64% on VP9 
encoding](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/arm-outperforms-x86-by-up-to-64-percent-on-vp9) | | ISA-L | | | | IPSEC-MB | | | -| AV1 | | | -| SLEEF | | | -| AES | | AWS: [AWS Graviton3 delivers leading AES-GCM encryption performance](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/aes-gcm-optimizations-for-armv8-4-on-neoverse-v1-graviton3) | -| Snappy | [Measure performance of compression libraries on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/snappy/) | AWS: [Comparing data compression algorithm performance on AWS Graviton2](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/comparing-data-compression-algorithm-performance-on-aws-graviton2-342166113) | +| AV1 | [Run the AV1 and VP9 codecs on Arm Linux](https://learn.arm.com/learning-paths/servers-and-cloud-computing/codec1/) | | +| SLEEF | | [A New Pulse for SLEEF](https://sleef.org/2024/10/02/new-pulse.html) | +| AES | | [AWS Graviton3 delivers leading AES-GCM encryption performance](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/aes-gcm-optimizations-for-armv8-4-on-neoverse-v1-graviton3) | +| Snappy | [Measure performance of compression libraries on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/snappy/) | [Comparing data compression algorithm performance on AWS Graviton2](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/comparing-data-compression-algorithm-performance-on-aws-graviton2-342166113) | | Cloudflare zlib | [Learn how to build and use Cloudflare zlib on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/zlib/) | | {{< /tab >}} @@ -171,16 +172,16 @@ Which databases are available on Arm servers? 
| Database | Learning Paths | Other Content (Blogs/Videos) | |-----------|----------------|----------------------------------------| -| MySQL | AWS: [Deploy WordPress with MySQL on Elastic Kubernetes Service (EKS)](https://learn.arm.com/learning-paths/servers-and-cloud-computing/eks/) | | +| MySQL | [Deploy WordPress with MySQL on Elastic Kubernetes Service (EKS)](https://learn.arm.com/learning-paths/servers-and-cloud-computing/eks/) | | | MySQL | [Learn how to deploy MySQL](https://learn.arm.com/learning-paths/servers-and-cloud-computing/mysql/) | | | MySQL | [Benchmarking MySQL with Sysbench](https://learn.arm.com/learning-paths/servers-and-cloud-computing/mysql_benchmark/) | | | MySQL | [Learn how to Tune MySQL](https://learn.arm.com/learning-paths/servers-and-cloud-computing/mysql_tune/) | | | PostgreSQL | [Learn how to deploy PostgreSQL](https://learn.arm.com/learning-paths/servers-and-cloud-computing/postgresql/) | | | Flink | [Benchmark the performance of Flink on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/flink/) | -| Clickhouse | [Measure performance of ClickHouse on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/clickhouse/) | AWS: [Improve ClickHouse Performance up to 26% by using AWS Graviton3](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/improve-clickhouse-performance-up-to-26-by-using-aws-graviton3) | +| Clickhouse | [Measure performance of ClickHouse on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/clickhouse/) | [Improve ClickHouse Performance up to 26% by using AWS Graviton3](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/improve-clickhouse-performance-up-to-26-by-using-aws-graviton3) | | MongoDB | [Test the performance of MongoDB on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/mongodb/) | [MongoDB performance on Arm Neoverse based AWS Graviton2 processors](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/mongodb-performance-on-aws-with-the-arm-graviton2) | -| Redis | [Deploy Redis on Arm](https://learn.arm.com/learning-paths/servers-and-cloud-computing/redis/) | Alibaba: [Improve Redis performance up to 36% by deploying on Alibaba Cloud Yitian 710 instances](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/improve-redis-performance-by-deploying-on-alibaba-cloud-yitian-710-instances) | -| Spark | AWS: [Learn how to deploy Spark on AWS Graviton2](https://learn.arm.com/learning-paths/servers-and-cloud-computing/spark/) | AWS: [Spark on AWS Graviton2 best practices: K-Means clustering case study](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/optimize-spark-on-aws-graviton2-best-practices-k-means-clustering) | +| Redis | [Deploy Redis on Arm](https://learn.arm.com/learning-paths/servers-and-cloud-computing/redis/) | [Improve Redis performance up to 36% by deploying on Alibaba Cloud Yitian 710 instances](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/improve-redis-performance-by-deploying-on-alibaba-cloud-yitian-710-instances) | +| Spark | [Learn how to deploy Spark on AWS Graviton2](https://learn.arm.com/learning-paths/servers-and-cloud-computing/spark/) | [Spark on AWS Graviton2 best practices: K-Means clustering case 
study](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/optimize-spark-on-aws-graviton2-best-practices-k-means-clustering) | | MariaDB | [Deploy MariaDB on Arm servers](https://learn.arm.com/learning-paths/servers-and-cloud-computing/mariadb/) | | Elasticsearch/Opensearch | | | Spark+Gluten+Velox | | @@ -192,7 +193,7 @@ Which databases are available on Arm servers? Which software helps me build web applications on Arm servers? | Software | Learning Paths | Other Content (Blogs/Videos) | |-----------|----------------|----------------------------------------| -| Nginx | [Learn how to deploy Nginx](https://learn.arm.com/learning-paths/servers-and-cloud-computing/nginx/) | AWS: [Nginx Performance on AWS Graviton3](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/nginx-performance-on-graviton-3) | +| Nginx | [Learn how to deploy Nginx](https://learn.arm.com/learning-paths/servers-and-cloud-computing/nginx/) | [Nginx Performance on AWS Graviton3](https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/nginx-performance-on-graviton-3) | | | [Learn how to tune Nginx](https://learn.arm.com/learning-paths/servers-and-cloud-computing/nginx_tune/) | | | Django | [Learn how to deploy a Django application](https://learn.arm.com/learning-paths/servers-and-cloud-computing/django/) | | diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/llm-chatbot/javascript--llm-chatbot.html b/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/llm-chatbot/javascript--llm-chatbot.html index e1068ae189..7f6a661d16 100644 --- a/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/llm-chatbot/javascript--llm-chatbot.html +++ b/themes/arm-design-system-hugo-theme/layouts/partials/demo-components/llm-chatbot/javascript--llm-chatbot.html @@ -645,7 +645,8 @@ } return readStream(); // Read the stream } else { - console.error('Error sending message to the server'); + console.error('Error sending message to the server',error); + console.log(response); showPopupPostConnection("Problem sending message - try sending a new message.","error"); } }) diff --git a/themes/arm-design-system-hugo-theme/layouts/partials/general-formatting/metadata-table.html b/themes/arm-design-system-hugo-theme/layouts/partials/general-formatting/metadata-table.html index 20e160d0a0..03fdd74e1b 100644 --- a/themes/arm-design-system-hugo-theme/layouts/partials/general-formatting/metadata-table.html +++ b/themes/arm-design-system-hugo-theme/layouts/partials/general-formatting/metadata-table.html @@ -47,7 +47,9 @@ -{{ range $i, $row := getCSV "," "contributors.csv" }} +{{ $csv := resources.Get "contributors.csv" }} +{{ $csv_content := $csv | transform.Unmarshal (dict "delimiter" ",") }} +{{ range $i, $row := $csv_content }} {{/* {{ if gt (len $authors_multiple) 0 }} diff --git a/themes/arm-design-system-hugo-theme/layouts/stats/list.html b/themes/arm-design-system-hugo-theme/layouts/stats/list.html index 966a54fcdf..b2f8064dd9 100644 --- a/themes/arm-design-system-hugo-theme/layouts/stats/list.html +++ b/themes/arm-design-system-hugo-theme/layouts/stats/list.html @@ -15,7 +15,9 @@ {{$company := ""}} {{$author := replace $author_urlized "-" " " | title }} - {{ range $i, $row := getCSV "," "contributors.csv" }} + {{ $csv := resources.Get "contributors.csv" }} + {{ $csv_content := $csv | transform.Unmarshal (dict "delimiter" ",") }} + {{ range $i, $row := $csv_content }} {{ if eq $author (index $row 0)}} 
{{ $company = index $row 1 }}