diff --git a/profile/EdgeAI.md b/profile/EdgeAI.md
new file mode 100644
index 0000000..0987fa4
--- /dev/null
+++ b/profile/EdgeAI.md
@@ -0,0 +1,60 @@
+[**Arm Examples**](https://github.com/Arm-Examples/) » **Edge AI**
+
+# Edge AI (Machine Learning)
+
+Arm offers comprehensive tool and software support for Edge AI development targeting the [Cortex-M processor family](https://www.arm.com/products/silicon-ip-cpu?families=cortex-m&showall=true) and the [Ethos-U NPU series](https://www.arm.com/products/silicon-ip-cpu?families=ethos%20npus). Simple machine learning algorithms run even on an ultra-low-power Cortex-M0+ device, while the Cortex-M52/55/85 processors with the [Helium vector extension](https://www.arm.com/technologies/helium) are optimized for neural networks. Combining a Cortex-M processor with an Ethos-U NPU delivers up to a 480-times performance uplift for ML workloads while maintaining minimal power consumption.
+
+## ML Frameworks for Cortex-M and Ethos-U
+
+The Arm software and tool ecosystem integrates seamlessly with popular ML frameworks including LiteRT (formerly TensorFlow Lite) and ExecuTorch (PyTorch-based).
+
+[")](https://www.keil.arm.com/packs/tensorflow-lite-micro-tensorflow)
+
+**[LiteRT (TensorFlow Lite Runtime)](https://www.keil.arm.com/packs/tensorflow-lite-micro-tensorflow)** is a production-grade inference runtime optimized for Cortex-M microcontrollers, optionally with Ethos-U acceleration.
+
+- Proven for Cortex-M only and Cortex-M + Ethos-U
+- Optimized kernels for constrained memory
+- Stable operator coverage for classic ML models
+- Strong ecosystem, ready today
+
+Explore the [LiteRT software pack for Arm Cortex-M](https://github.com/MDK-Packs/tensorflow-pack) with ready-to-use workflow templates and examples.
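+
+The sketch below shows how such a model is typically executed with the LiteRT for Microcontrollers C++ API. It is a minimal example, assuming the quantized `.tflite` model has already been converted to a C array (`g_model_data` is a placeholder name) and uses only the operators registered below.
+
+```cpp
+#include <cstdint>
+
+#include "tensorflow/lite/micro/micro_interpreter.h"
+#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
+#include "tensorflow/lite/schema/schema_generated.h"
+
+// Quantized model converted to a C array, e.g. with `xxd -i model.tflite`.
+extern const unsigned char g_model_data[];
+
+// Scratch memory for all tensors; the required size depends on the model.
+constexpr int kTensorArenaSize = 20 * 1024;
+static uint8_t tensor_arena[kTensorArenaSize];
+
+void RunInference(const int8_t* features, int feature_count) {
+  const tflite::Model* model = tflite::GetModel(g_model_data);
+
+  // Register only the operators the model actually uses to keep code size small.
+  static tflite::MicroMutableOpResolver<3> resolver;
+  resolver.AddFullyConnected();
+  resolver.AddRelu();
+  resolver.AddSoftmax();
+
+  static tflite::MicroInterpreter interpreter(model, resolver,
+                                              tensor_arena, kTensorArenaSize);
+  interpreter.AllocateTensors();
+
+  // Copy quantized input data and run the model.
+  TfLiteTensor* input = interpreter.input(0);
+  for (int i = 0; i < feature_count; ++i) {
+    input->data.int8[i] = features[i];
+  }
+  interpreter.Invoke();
+
+  // Read the quantized output, e.g. class scores.
+  TfLiteTensor* output = interpreter.output(0);
+  (void)output->data.int8[0];
+}
+```
+
+On Ethos-U targets the application code stays the same: operators that Vela has compiled are dispatched to the NPU through the generated Ethos-U custom operator, while the remaining operators run on the Cortex-M core via CMSIS-NN.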
+
+[")](https://github.com/Arm-Examples/CMSIS-Executorch)
+
+**ExecuTorch (Lightweight PyTorch Runtime)** brings PyTorch, today the dominant framework in ML research, to embedded targets, making it easier to develop and share new ML models.
+
+- Strong for LLMs, vision, multimodal, generative AI
+- ML developer friendly with modern tooling
+- Rapidly growing ecosystem momentum
+- Now maturing for Cortex-M + Ethos-U targets
+
+Get started with [CMSIS-Executorch](https://github.com/Arm-Examples/CMSIS-Executorch) for PyTorch-based AI models on Cortex-M and Ethos-U targets.
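+
+As an illustration of the ExecuTorch programming model, the hedged sketch below follows the upstream Module extension (C++), which loads an exported `.pte` program and runs it. On bare-metal Cortex-M targets there is no file system, so the CMSIS-Executorch examples rely on the lower-level runtime APIs with the program placed in memory; the path, shape, and names here are placeholders.
+
+```cpp
+#include <executorch/extension/module/module.h>
+#include <executorch/extension/tensor/tensor.h>
+
+using namespace ::executorch::extension;
+
+int main() {
+  // Load a program exported from PyTorch via the ExecuTorch ahead-of-time flow.
+  Module module("model.pte");
+
+  // Wrap existing input data in a tensor (shape is a placeholder).
+  float input[1 * 64] = {};
+  auto tensor = from_blob(input, {1, 64});
+
+  // Run the model's forward method.
+  const auto result = module.forward(tensor);
+
+  if (result.ok()) {
+    // Read back the first output tensor.
+    const float* output = result->at(0).toTensor().const_data_ptr<float>();
+    (void)output;
+  }
+  return 0;
+}
+```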
+
+## ML Runtime System
+
+ML models from ecosystem partners or open-source model zoos such as Hugging Face are quantized and optimized for embedded deployment. CMSIS-NN executes these optimized models on Cortex-M processors. For Ethos-U NPU targets, Vela converts model operations for NPU acceleration, while operations that cannot be converted continue to run on Cortex-M via CMSIS-NN.
+
+![ML Model Runtime](ML_Model_Runtime.png)
+
+Using these workflows, developers can deploy trained models from PyTorch, TensorFlow, and other frameworks onto Arm targets with optimal performance and energy efficiency, enabling intelligent capabilities in IoT devices, wearables, industrial sensors, and other edge computing applications.
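+
+For the operators that stay on the Cortex-M core, CMSIS-NN supplies the optimized low-level kernels; normally the runtime (LiteRT or ExecuTorch) calls them for you. The sketch below illustrates a direct call to the 8-bit fully connected kernel purely as an example of that layer: the dimensions, offsets, multiplier, and shift are placeholders that would come from the quantized model, and the parameter structs should be checked against `arm_nnfunctions.h` in your CMSIS-NN version.
+
+```cpp
+#include <cstdint>
+
+#include "arm_nnfunctions.h"  // CMSIS-NN public API
+
+// One int8 fully connected layer: 64 inputs -> 10 outputs (placeholder sizes).
+void RunFullyConnected(const int8_t input[64],
+                       const int8_t weights[64 * 10],
+                       const int32_t bias[10],
+                       int8_t output[10]) {
+  cmsis_nn_context ctx;
+  ctx.buf = nullptr;  // scratch buffer; see arm_fully_connected_s8_get_buffer_size()
+  ctx.size = 0;
+
+  cmsis_nn_fc_params fc_params;
+  fc_params.input_offset = 0;       // placeholder quantization offsets
+  fc_params.filter_offset = 0;
+  fc_params.output_offset = 0;
+  fc_params.activation.min = -128;  // full int8 range, i.e. no extra clamping
+  fc_params.activation.max = 127;
+
+  cmsis_nn_per_tensor_quant_params quant_params;
+  quant_params.multiplier = 1073741824;  // placeholder requantization factors
+  quant_params.shift = -1;
+
+  cmsis_nn_dims input_dims;  input_dims.n = 1;  input_dims.h = 1;  input_dims.w = 1;  input_dims.c = 64;
+  cmsis_nn_dims filter_dims; filter_dims.n = 64; filter_dims.h = 1; filter_dims.w = 1; filter_dims.c = 10;
+  cmsis_nn_dims bias_dims;   bias_dims.n = 1;   bias_dims.h = 1;   bias_dims.w = 1;   bias_dims.c = 10;
+  cmsis_nn_dims output_dims; output_dims.n = 1; output_dims.h = 1; output_dims.w = 1; output_dims.c = 10;
+
+  // Runs the optimized kernel; uses Helium (MVE) or DSP instructions when available.
+  (void)arm_fully_connected_s8(&ctx, &fc_params, &quant_params,
+                               &input_dims, input,
+                               &filter_dims, weights,
+                               &bias_dims, bias,
+                               &output_dims, output);
+}
+```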
+
+The [Vela compiler](https://pypi.org/project/ethos-u-vela/) reports detailed optimization output, diagnostics, and performance estimates for Ethos-U NPU targets.
+
+## Embedded Development Workflow
+
+The embedded development workflow for Edge AI applications starts with training data used for ML model training on a host or MLOps system. Once trained, the ML model is deployed to the embedded target as described above. [Keil MDK](https://www.keil.arm.com/keil-mdk/) provides all tools required to develop the application and to integrate the optimized ML models with the application code, device drivers, RTOS, and middleware components. The MLOps backend tools (Vela, Arm Compiler, and FVP simulation models) are provided via Docker containers and via CMSIS-ExecuTorch, which also includes the runtime system.
+
+The [SDS-Framework](https://www.keil.arm.com/packs/sds-arm) is a workbench for ML model development. You may capture and record real-world sensor, audio, or video data streams directly from your target hardware for ML model training. SDS enables data playback and validation of the ML model output against performance indicators. The option to run tests on hardware or FVP simulation models enables automated testing and CI/MLOps workflows without requiring physical target hardware at every step.
+
+![Embedded Development Workflow](Embedded_Development.png)
+
+Discover [Arm's ML ecosystem partners](https://www.arm.com/partners/ai-and-ml) offering optimized models, tools, and solutions for edge AI applications.
+
+## More Edge AI Developer Resources
+
+- [CMSIS-NN](https://github.com/ARM-software/CMSIS-NN) - Optimized neural network kernels for Cortex-M processors
+- [TensorFlow Runtime System](https://www.keil.arm.com/packs/tensorflow-lite-micro-tensorflow) - LiteRT software pack with examples and integration support
+- [ML Evaluation Kit (MLEK)](https://www.keil.arm.com/packs/arm-mlek-arm) - Pre-configured ML projects and template applications for microcontroller targets
+- [SDS-Framework](https://github.com/ARM-software/SDS-Framework) - Workbench for capturing sensor data, validating ML models, and enabling CI/MLOps workflows
+- [CMSIS-Executorch](https://github.com/Arm-Examples/CMSIS-Executorch) - ExecuTorch integration for Cortex-M and Ethos-U targets
+- [CMSIS-Zephyr-Executorch](https://github.com/Arm-Examples/CMSIS-Zephyr-Executorch) - ExecuTorch integration for Zephyr RTOS applications
diff --git a/profile/Embedded_Development.png b/profile/Embedded_Development.png
new file mode 100644
index 0000000..9ce5cb6
Binary files /dev/null and b/profile/Embedded_Development.png differ
diff --git a/profile/ImageSource/images.pptx b/profile/ImageSource/images.pptx
index ee3c425..0c410e6 100644
Binary files a/profile/ImageSource/images.pptx and b/profile/ImageSource/images.pptx differ
diff --git a/profile/LiteRT.png b/profile/LiteRT.png
new file mode 100644
index 0000000..634678e
Binary files /dev/null and b/profile/LiteRT.png differ
diff --git a/profile/ML_Model_Runtime.png b/profile/ML_Model_Runtime.png
new file mode 100644
index 0000000..c231dd8
Binary files /dev/null and b/profile/ML_Model_Runtime.png differ
diff --git a/profile/PyTorch.png b/profile/PyTorch.png
new file mode 100644
index 0000000..131cecc
Binary files /dev/null and b/profile/PyTorch.png differ
diff --git a/profile/README.md b/profile/README.md
index 48aa06e..49c3bce 100644
--- a/profile/README.md
+++ b/profile/README.md
@@ -24,7 +24,9 @@ Keil Studio is designed for all types of embedded projects, ranging from bare-me
[
](https://armkeil.blob.core.windows.net/developer/Files/videos/KeilStudio/20250812_Multicore_Alif.mp4?#t=07:22 "Development flow for optimized Edge AI devices")
-Comprehensive machine learning capabilities are available with ML Evaluation Kit (MLEK), Synchronous Data Streaming (SDS) Framework, LiteRT (TensorFlow), and Executourch that utilizes CMSIS-NN (for Cortex-M) or Vela (for Ethos-U). **[Watch this video to learn more...](https://armkeil.blob.core.windows.net/developer/Files/videos/KeilStudio/20250812_Multicore_Alif.mp4?#t=07:22 "Development flow for optimized Edge AI devices")**
+Arm offers comprehensive tool and software support for Edge AI development on the Cortex-M processor family and the Ethos-U NPU series.
+
+**[Watch this video](https://armkeil.blob.core.windows.net/developer/Files/videos/KeilStudio/20250812_Multicore_Alif.mp4?#t=07:22 "Development flow for optimized Edge AI devices")**, explore the projects below, or read the [**Edge AI**](EdgeAI.md) section to learn more.