diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/_index.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/_index.md
new file mode 100644
index 0000000000..5bacfc1f0b
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/_index.md
@@ -0,0 +1,60 @@
+---
+title: Run MongoDB on the Microsoft Azure Cobalt 100 processors
+
+draft: true
+cascade:
+  draft: true
+
+minutes_to_complete: 30
+
+who_is_this_for: This Learning Path is designed for software developers looking to migrate their MongoDB workloads from x86_64 to Arm-based platforms, specifically the Microsoft Azure Cobalt 100 processor.
+
+learning_objectives:
+  - Provision an Azure Arm64 virtual machine using the Azure console, with Ubuntu Pro 24.04 LTS as the base image.
+  - Deploy MongoDB on an Azure Ubuntu virtual machine.
+  - Perform MongoDB baseline testing and benchmarking on both x86_64 and Arm64 virtual machines.
+
+prerequisites:
+  - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
+  - Basic understanding of the Linux command line.
+  - Familiarity with the [MongoDB architecture](https://www.mongodb.com/) and deployment practices on Arm64 platforms.
+
+author: Jason Andrews
+
+### Tags
+skilllevels: Introductory
+subjects: Databases
+cloud_service_providers: Microsoft Azure
+
+armips:
+  - Neoverse
+
+tools_software_languages:
+  - MongoDB
+  - mongotop
+  - mongostat
+
+operatingsystems:
+  - Linux
+
+further_reading:
+  - resource:
+      title: MongoDB Manual
+      link: https://www.mongodb.com/docs/manual/
+      type: documentation
+  - resource:
+      title: MongoDB Performance Tool
+      link: https://github.com/idealo/mongodb-performance-test#readme
+      type: documentation
+  - resource:
+      title: MongoDB on Azure
+      link: https://azure.microsoft.com/en-us/solutions/mongodb
+      type: documentation
+
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1 # _index.md always has weight of 1 to order correctly
+layout: "learningpathall" # All files under learning paths have this same wrapper
+learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/_next-steps.md
new file mode 100644
index 0000000000..c3db0de5a2
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/_next-steps.md
@@ -0,0 +1,8 @@
+---
+# ================================================================================
+# FIXED, DO NOT MODIFY THIS FILE
+# ================================================================================
+weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
+title: "Next Steps" # Always the same, html page title.
+layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
+--- diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/background.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/background.md new file mode 100644 index 0000000000..fa257f0c98 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/background.md @@ -0,0 +1,20 @@ +--- +title: "Overview" + +weight: 2 + +layout: "learningpathall" +--- + +## Cobalt 100 Arm-based processor + +Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance. + +To learn more about Cobalt 100, refer to the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353). + +## MongoDB +MongoDB is a popular open-source NoSQL database designed for high performance, scalability, and flexibility. + +It stores data in JSON-like BSON documents, making it ideal for modern applications that require dynamic, schema-less data structures. + +MongoDB is widely used for web, mobile, IoT, and real-time analytics workloads. Learn more from the [MongoDB official website](https://www.mongodb.com/) and its [official documentation](https://www.mongodb.com/docs/). diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/baseline-testing.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/baseline-testing.md new file mode 100644 index 0000000000..e3e407e620 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/baseline-testing.md @@ -0,0 +1,214 @@ +--- +title: MongoDB Baseline Testing +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + + +### Baseline testing of MongoDB +Perform baseline testing by verifying MongoDB is running, logging into the shell, executing a few test queries, and monitoring live performance. This ensures the database is functioning correctly before starting any benchmarks. + +1. Verify Installation & Service Health + +```console +ps -ef | grep mongod +mongod --version +netstat -tulnp | grep 27017 +``` +- **ps -ef | grep mongod** – Checks if the MongoDB server process is running. +- **mongod --version** – Shows the version of MongoDB installed. +- **netstat -tulnp | grep 27017** – Checks if MongoDB is listening for connections on its default port 27017. + +You should see an output similar to: + +```output +mongod --version +netstat -tulnp | grep 27017 +ubuntu 4288 1 0 10:40 ? 
00:00:01 mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --fork +ubuntu 4545 1764 0 10:43 pts/0 00:00:00 grep --color=auto mongod +db version v8.0.12 +Build Info: { + "version": "8.0.12", + "gitVersion": "b60fc6875b5fb4b63cc0dbbd8dda0d6d6277921a", + "openSSLVersion": "OpenSSL 3.0.13 30 Jan 2024", + "modules": [], + "allocator": "tcmalloc-google", + "environment": { + "distmod": "ubuntu2404", + "distarch": "aarch64", + "target_arch": "aarch64" + } +} +(Not all processes could be identified, non-owned process info + will not be shown, you would have to be root to see it all.) +tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 4288/mongod +``` + +2. Storage and Health Check + +Run the command below to check how fast your storage can **randomly read small 4KB chunks** from a 100 MB file for 30 seconds, using one job, and then show a summary report: + +```console +fio --name=baseline --rw=randread --bs=4k --size=100M --numjobs=1 --time_based --runtime=30 --group_reporting +``` +You should see an output similar to: + +```output +baseline: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 +fio-3.36 +Starting 1 process +Jobs: 1 (f=1): [r(1)][100.0%][r=14.8MiB/s][r=3799 IOPS][eta 00m:00s] +baseline: (groupid=0, jobs=1): err= 0: pid=3753: Mon Sep 1 10:25:07 2025 + read: IOPS=4255, BW=16.6MiB/s (17.4MB/s)(499MiB/30001msec) + clat (usec): min=88, max=46246, avg=234.23, stdev=209.81 + lat (usec): min=88, max=46246, avg=234.28, stdev=209.81 + clat percentiles (usec): + | 1.00th=[ 99], 5.00th=[ 111], 10.00th=[ 126], 20.00th=[ 167], + | 30.00th=[ 190], 40.00th=[ 229], 50.00th=[ 243], 60.00th=[ 253], + | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 330], + | 99.00th=[ 416], 99.50th=[ 490], 99.90th=[ 799], 99.95th=[ 1106], + | 99.99th=[ 3884] + bw ( KiB/s): min=14536, max=19512, per=100.00%, avg=17046.10, stdev=1359.69, samples=59 + iops : min= 3634, max= 4878, avg=4261.53, stdev=339.92, samples=59 + lat (usec) : 100=1.27%, 250=56.61%, 500=41.65%, 750=0.34%, 1000=0.06% + lat (msec) : 2=0.04%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% + cpu : usr=0.33%, sys=2.93%, ctx=127668, majf=0, minf=8 + IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% + submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% + issued rwts: total=127661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 + latency : target=0, window=0, percentile=100.00%, depth=1 + +Run status group 0 (all jobs): + READ: bw=16.6MiB/s (17.4MB/s), 16.6MiB/s-16.6MiB/s (17.4MB/s-17.4MB/s), io=499MiB (523MB), run=30001-30001msec + +Disk stats (read/write): + sda: ios=127195/29, sectors=1017560/552, merge=0/15, ticks=29133/8, in_queue=29151, util=96.37% +``` +The output shows how fast it read data (**16.6 MB/s**) and how many reads it did per second (**~4255 IOPS**), which tells you how responsive your storage is for random reads. + +3. 
Connectivity and CRUD Sanity Check + +```console +mongosh --host localhost --port 27017 +``` + +Inside shell: + +```javascript +use baselineDB +db.testCollection.insertOne({ name: "baseline-check", value: 1 }) +db.testCollection.find() +db.testCollection.updateOne({ name: "baseline-check" }, { $set: { value: 2 } }) +db.testCollection.deleteOne({ name: "baseline-check" }) +exit +``` +These commands create a test record, read it, update its value, and then delete it a simple way to check if MongoDB’s basic **add, read, update, and delete** operations are working. + +You should see an output similar to: + +```output +test> use baselineDB +switched to db baselineDB +baselineDB> db.testCollection.insertOne({ name: "baseline-check", value: 1 }) +{ + acknowledged: true, + insertedId: ObjectId('689acdae6a86b49bca74e39a') +} +baselineDB> db.testCollection.find() +[ + { + _id: ObjectId('689acdae6a86b49bca74e39a'), + name: 'baseline-check', + value: 1 + } +] +baselineDB> db.testCollection.updateOne({ name: "baseline-check" }, { $set: { value: 2 } }) +... +{ + acknowledged: true, + insertedId: null, + matchedCount: 1, + modifiedCount: 1, + upsertedCount: 0 +} +baselineDB> db.testCollection.deleteOne({ name: "baseline-check" }) +... +{ acknowledged: true, deletedCount: 1 } +``` + +4. Basic Query Performance Test + +```console +mongosh --eval ' +db = db.getSiblingDB("baselineDB"); +for (let i=0; i<1000; i++) { db.perf.insertOne({index:i, value:Math.random()}) }; +var start = new Date(); +db.perf.find({ value: { $gt: 0.5 } }).count(); +print("Query Time (ms):", new Date() - start); +' +``` +The command connected to MongoDB, switched to the **baselineDB** database, inserted **1,000 documents** into the perf collection, and then measured the execution time for counting documents where **value > 0.5**. The final output displayed the **query execution time** in milliseconds. + +You should see an output similar to: + +```output +Query Time (ms): 2 +``` + +5. Index Creation Speed Test + +```console +mongosh --eval ' +db = db.getSiblingDB("baselineDB"); +var start = new Date(); +db.perf.createIndex({ value: 1 }); +print("Index Creation Time (ms):", new Date() - start); +' +``` +The test connected to MongoDB, switched to the **baselineDB** database, and created an index on the **value** field in the **perf** collection. The index creation process completed in **22 milliseconds**, indicating relatively fast index building for the dataset size. + +You should see an output similar to: + +```output +Index Creation Time (ms): 22 +``` + +6. Concurrency Smoke Test + +```console +for i in {1..5}; do + /usr/bin/mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' & +done +wait +``` +This command runs **five MongoDB insert jobs at the same time**, each adding **1,000 new records** to the **baselineDB.concurrent** collection. +It’s a quick way to test how MongoDB handles **multiple users writing data at once**. 
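+
+After you have checked the expected output of the loop (shown below), you can optionally confirm that all of the concurrent writes landed by counting the documents in the collection. This is a minimal sketch; it assumes the five jobs of 1,000 inserts each completed successfully, so a single run of the loop should report roughly 5,000 documents:
+
+```console
+# Count the documents written by the concurrent insert jobs
+mongosh --quiet --eval 'db.getSiblingDB("baselineDB").concurrent.countDocuments()'
+```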
+ +You should see an output similar to: + +```output +[1] 3818 +[2] 3819 +[3] 3820 +[4] 3821 +[5] 3822 +switched to db baselineDB; +[1] Done mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' +switched to db baselineDB; +switched to db baselineDB; +switched to db baselineDB; +[2] Done mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' +[4]- Done mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' +[3]- Done mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' +switched to db baselineDB; +[5]+ Done mongosh --eval 'use baselineDB; db.concurrent.insertMany([...Array(1000).keys()].map(k => ({ test: k, ts: new Date() })))' +``` + +**Five parallel MongoDB shell sessions** were executed, each inserting **1,000** test documents into the baselineDB.concurrent collection. All sessions completed successfully, confirming that concurrent data insertion works as expected. + +The above operations confirm that MongoDB is installed successfully and is functioning as expected on the Azure Cobalt 100 (Arm64) environment. + +Now, your MongoDB instance is ready for further benchmarking and production use. diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/benchmarking.md new file mode 100644 index 0000000000..4d3be4d505 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/benchmarking.md @@ -0,0 +1,261 @@ +--- +title: MongoDB Benchmarking +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Benchmark MongoDB with **mongotop** and **mongostat** + +This guide will help the user measure MongoDB’s performance in real time. +The user will install the official MongoDB database tools, start MongoDB, run a script to simulate heavy load, and watch the database’s live performance using **mongotop** and **mongostat**. + +1. Install MongoDB Database Tools + +```console +wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu2404-arm64-100.13.0.deb +sudo apt update +sudo apt install -y ./mongodb-database-tools-ubuntu2404-arm64-100.13.0.deb +echo 'export PATH=$PATH:~/mongodb-database-tools-ubuntu2404-arm64-100.13.0/bin' >> ~/.bashrc +source ~/.bashrc +``` +These commands download and unpack MongoDB’s official monitoring tools (**mongotop** & **mongostat**), then add them to your PATH so you can run them from any terminal. + +2. Verify the Installation + +```console +mongotop --version +mongostat --version +``` +This checks that both tools were installed correctly and are ready to use. + +You should see an output similar to: +```output +mongostat --version +mongotop version: 100.13.0 +git version: 23008ff975be028544710a5da6ae749dc7e90ab7 +Go version: go1.23.11 + os: linux + arch: arm64 + compiler: gc +mongostat version: 100.13.0 +git version: 23008ff975be028544710a5da6ae749dc7e90ab7 +Go version: go1.23.11 + os: linux + arch: arm64 + compiler: gc +``` + +3. Start MongoDB Server + +```console +mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --fork +``` +These commands create a folder for MongoDB’s data, then start the database server in the background, allowing connections from any IP, and save logs for troubleshooting. + +4. 
Create a Long-Running Load Script for Benchmarking + +Save this script file as **long_system_load.js**: + +```javascript +function randomString(len) { + return Math.random().toString(36).substring(2, 2 + len); +} + +var systemCollections = [ + { db: "admin", coll: "atlascli" }, + { db: "config", coll: "system_sessions_bench" }, + { db: "config", coll: "transactions_bench" }, + { db: "local", coll: "system_replset_bench" }, + { db: "benchmarkDB", coll: "testCollection" }, + { db: "benchmarkDB", coll: "cursorTest" }, + { db: "test", coll: "atlascli" }, + { db: "test", coll: "system_sessions_bench" }, + { db: "test", coll: "admin_system_version_test" } +]; + +systemCollections.forEach(function(ns) { + let col = db.getSiblingDB(ns.db).getCollection(ns.coll); + col.drop(); + for (let i = 0; i < 100; i++) { + col.insertOne({ rnd: randomString(10), ts: new Date(), idx: i }); + } + col.findOne(); +}); + +var totalCycles = 50; +var pauseMs = 1000; + +for (let cycle = 0; cycle < totalCycles; cycle++) { + systemCollections.forEach(function(ns) { + let col = db.getSiblingDB(ns.db).getCollection(ns.coll); + + col.insertOne({ cycle, action: "insert", value: randomString(8), ts: new Date() }); + col.find({ cycle: { $lte: cycle } }).limit(10).toArray(); + col.updateMany({}, { $set: { updatedAt: new Date() } }); + col.deleteMany({ idx: { $gt: 80 } }); + + let cursor = col.find().batchSize(5); + while (cursor.hasNext()) { + cursor.next(); + } + }); + + print(`Cycle ${cycle + 1} / ${totalCycles} completed`); + sleep(pauseMs); +} + +print("=== Long load generation completed ==="); +``` + +This is the load generator script, it creates several collections and repeatedly **inserts, queries, updates** and **deletes** data. Running it simulates real application traffic so the monitors have something to measure. + +{{% notice Note %}} +Before proceeding, the load script and the monitoring tools must be run in separate terminals simultaneously. + +- The load script continuously generates activity in MongoDB, keeping the database busy with multiple operations. +- The mongotop and mongostat tools monitor and report this activity in real time as it happens. + +If all commands are run in the same terminal, the monitoring tools will only start after the script finishes, preventing real-time observation of MongoDB’s performance. +{{% /notice %}} + +### Run the load script (start the workload) — Terminal 1 + +```console +mongosh < long_system_load.js +``` + +This command tells the MongoDB shell to execute the entire script. The script will run through its cycles and print the progress while generating the read/write activity on the server. + +You should see an output similar to: +```output +test> // long_system_load.js + +test> // Run with: mongosh < long_system_load.js + +test> + +test> function randomString(len) { +... return Math.random().toString(36).substring(2, 2 + len); +... } +[Function: randomString] +test> + +test> // ---------- 1. Safe shadow "system-like" namespaces ---------- + +test> var systemCollections = [ +... { db: "admin", coll: "atlascli" }, +... { db: "config", coll: "system_sessions_bench" }, +... { db: "config", coll: "transactions_bench" }, +... { db: "local", coll: "system_replset_bench" }, +... { db: "benchmarkDB", coll: "testCollection" }, +... { db: "benchmarkDB", coll: "cursorTest" }, +... { db: "test", coll: "atlascli" }, +... { db: "test", coll: "system_sessions_bench" }, +... { db: "test", coll: "admin_system_version_test" } +... 
]; + +test> + +test> // Create and warm up + +test> systemCollections.forEach(function(ns) { +... let col = db.getSiblingDB(ns.db).getCollection(ns.coll); +... col.drop(); +... for (let i = 0; i < 100; i++) { +... col.insertOne({ rnd: randomString(10), ts: new Date(), idx: i }); +... } +... col.findOne(); +... }); + +test> + +test> // ---------- 2. Generate load loop ---------- + +test> var totalCycles = 50; // increase this for longer runs + +test> var pauseMs = 1000; // 1 second pause between cycles + +test> + +test> for (let cycle = 0; cycle < totalCycles; cycle++) { +... systemCollections.forEach(function(ns) { +... let col = db.getSiblingDB(ns.db).getCollection(ns.coll); +... +... col.insertOne({ cycle, action: "insert", value: randomString(8), ts: new Date() }); +... col.find({ cycle: { $lte: cycle } }).limit(10).toArray(); +... col.updateMany({}, { $set: { updatedAt: new Date() } }); +... col.deleteMany({ idx: { $gt: 80 } }); +... +... let cursor = col.find().batchSize(5); +... while (cursor.hasNext()) { +... cursor.next(); +... } +... }); +... +... print(`Cycle ${cycle + 1} / ${totalCycles} completed`); +... sleep(pauseMs); +... } +Cycle 1 / 50 completed +Cycle 2 / 50 completed +Cycle 3 / 50 completed +Cycle 4 / 50 completed +Cycle 5 / 50 completed +Cycle 6 / 50 completed +Cycle 7 / 50 completed +Cycle 8 / 50 completed +Cycle 9 / 50 completed +Cycle 10 / 50 completed +Cycle 11 / 50 completed +Cycle 12 / 50 completed +Cycle 13 / 50 completed +Cycle 14 / 50 completed +Cycle 15 / 50 completed +Cycle 16 / 50 completed +Cycle 17 / 50 completed +Cycle 18 / 50 completed +Cycle 19 / 50 completed +Cycle 20 / 50 completed +Cycle 21 / 50 completed +Cycle 22 / 50 completed +Cycle 23 / 50 completed +Cycle 24 / 50 completed +Cycle 25 / 50 completed +Cycle 26 / 50 completed +Cycle 27 / 50 completed +Cycle 28 / 50 completed +Cycle 29 / 50 completed +Cycle 30 / 50 completed +Cycle 31 / 50 completed +Cycle 32 / 50 completed +Cycle 33 / 50 completed +Cycle 34 / 50 completed +Cycle 35 / 50 completed +Cycle 36 / 50 completed +Cycle 37 / 50 completed +Cycle 38 / 50 completed +Cycle 39 / 50 completed +Cycle 40 / 50 completed +Cycle 41 / 50 completed +Cycle 42 / 50 completed +Cycle 43 / 50 completed +Cycle 44 / 50 completed +Cycle 45 / 50 completed +Cycle 46 / 50 completed +Cycle 47 / 50 completed +Cycle 48 / 50 completed +Cycle 49 / 50 completed +Cycle 50 / 50 completed + +test> + +test> print("=== Long load generation completed ==="); +=== Long load generation completed === + +``` + +The load has been generated successfully. Now, you can proceed with the monitoring: + +- **mongotop** to observe activity per collection. +- **mongostat** to monitor overall operations per second, memory usage, and network activity. diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/create-instance.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/create-instance.md new file mode 100644 index 0000000000..9571395aa2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/create-instance.md @@ -0,0 +1,50 @@ +--- +title: Create an Arm based cloud virtual machine using Microsoft Cobalt 100 CPU +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Introduction + +There are several ways to create an Arm-based Cobalt 100 virtual machine : the Microsoft Azure console, the Azure CLI tool, or using your choice of IaC (Infrastructure as Code). 
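+
+As an illustration of the Azure CLI option, the following is a minimal sketch of creating a comparable Cobalt 100 (D4ps_v6) virtual machine from the command line; the resource group, VM name, region, admin username, and image URN are placeholder values, so confirm the exact Ubuntu Pro 24.04 LTS Arm64 image URN with `az vm image list` before running it:
+
+```console
+# Create a resource group to hold the VM (name and region are examples)
+az group create --name mongodb-rg --location eastus
+
+# Create an Arm64 (Cobalt 100) VM using the D4ps_v6 size and SSH key authentication
+az vm create \
+  --resource-group mongodb-rg \
+  --name mongodb-arm64 \
+  --size Standard_D4ps_v6 \
+  --image Canonical:ubuntu-24_04-lts:server-arm64:latest \
+  --admin-username azureuser \
+  --generate-ssh-keys
+```
+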
This guide will use the Azure console to create a virtual machine with an Arm-based Cobalt 100 processor.
+
+This Learning Path focuses on the general-purpose D-series virtual machines. For details, read the Microsoft Azure guide on the [Dpsv6 size series](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dpsv6-series).
+
+If you have never used the Microsoft Cloud Platform before, please review the Microsoft [guide to Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu).
+
+#### Create an Arm-based Azure Virtual Machine
+
+Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other virtual machine in Azure. To create an Azure virtual machine, launch the Azure portal and navigate to "Virtual Machines".
+1. Select "Create", and click on "Virtual Machine" from the drop-down list.
+2. Inside the "Basic" tab, fill in the Instance details such as "Virtual machine name" and "Region".
+3. Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture.
+4. In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list.
+
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Figure 1: Select the D-Series v6 family of virtual machines")
+
+5. Select "SSH public key" as the Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine.
+6. Fill in the Administrator username for your VM.
+7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA can offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key.
+8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports.
+
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Figure 2: Allow inbound port rules")
+
+9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following:
+
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Figure 3: Review and Create an Azure Cobalt 100 Arm64 VM")
+
+10. Finally, when you are confident about your selection, click on the "Create" button, and then click on the "Download Private key and Create Resources" button.
+
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Figure 4: Download Private key and Create Resources")
+
+11. Your virtual machine should be ready and running within a few minutes. You can SSH into the virtual machine using the private key and the public IP address.
+
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "Figure 5: VM deployment confirmation in Azure portal")
+
+{{% notice Note %}}
+
+To learn more about Arm-based virtual machines in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure).
+
+{{% /notice %}}
diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/deploy.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/deploy.md
new file mode 100644
index 0000000000..d339271840
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/deploy.md
@@ -0,0 +1,127 @@
+---
+title: Install MongoDB and Mongosh
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+
+## Install MongoDB and Mongosh on the Ubuntu Pro 24.04 LTS Arm instance
+
+Install MongoDB and mongosh on Ubuntu Pro 24.04 LTS Arm64 by downloading the binaries, setting up environment paths, configuring data and log directories, and starting the server for local access and verification.
+
+1. Install System Dependencies
+
+Install required system packages to support MongoDB:
+```console
+sudo apt update
+sudo apt install -y curl wget tar fio openssl libcurl4 net-tools
+```
+
+2. Download and Extract MongoDB
+
+Fetch and unpack the MongoDB binaries for Arm64:
+```console
+wget https://fastdl.mongodb.org/linux/mongodb-linux-aarch64-ubuntu2404-8.0.12.tgz
+tar -xvzf mongodb-linux-aarch64-ubuntu2404-8.0.12.tgz
+sudo mv mongodb-linux-aarch64-ubuntu2404-8.0.12 /usr/local/mongodb
+```
+
+3. Add MongoDB to System PATH
+
+Enable running MongoDB from any terminal session:
+```console
+echo 'export PATH=/usr/local/mongodb/bin:$PATH' | sudo tee /etc/profile.d/mongodb.sh
+source /etc/profile.d/mongodb.sh
+```
+
+4. Create Data and Log Directories
+
+Set up the database data and log directories:
+```console
+sudo mkdir -p /var/lib/mongo
+sudo mkdir -p /var/log/mongodb
+sudo chown -R $USER:$USER /var/lib/mongo /var/log/mongodb
+```
+
+5. Start MongoDB Server
+
+Start MongoDB manually:
+```console
+mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --fork
+```
+
+6. Install mongosh
+
+**mongosh** is the MongoDB Shell used to interact with your MongoDB server. It provides a modern, user-friendly CLI for running queries and database operations.
+ +Download and install MongoDB’s command-line shell for Arm: +```console +wget https://downloads.mongodb.com/compass/mongosh-2.3.8-linux-arm64.tgz +tar -xvzf mongosh-2.3.8-linux-arm64.tgz +sudo mv mongosh-2.3.8-linux-arm64 /usr/local/mongosh +``` +Add mongosh to System `PATH` +```console +echo 'export PATH=/usr/local/mongosh/bin:$PATH' | sudo tee /etc/profile.d/mongosh.sh +source /etc/profile.d/mongosh.sh +``` + +### Verify MongoDB and mongosh Installation + +Check if MongoDB and mongosh is properly installed: +```console +mongod --version +mongosh --version +``` +You should see an output similar to: +```output +db version v8.0.12 +Build Info: { + "version": "8.0.12", + "gitVersion": "b60fc6875b5fb4b63cc0dbbd8dda0d6d6277921a", + "openSSLVersion": "OpenSSL 3.2.2 4 Jun 2024", + "modules": [], + "allocator": "tcmalloc-google", + "environment": { + "distmod": "rhel93", + "distarch": "aarch64", + "target_arch": "aarch64" + } +} +$ mongosh --version +2.3.8 +``` + +### Connect to MongoDB via mongosh + +Start interacting with MongoDB through its shell interface: +```console +mongosh mongodb://127.0.0.1:27017 +``` +You should see an output similar to: +```output +Current Mongosh Log ID: 68b573411523231d81a00aa0 +Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.8 +Using MongoDB: 8.0.12 +Using Mongosh: 2.3.8 +mongosh 2.5.7 is available for download: https://www.mongodb.com/try/download/shell + +For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/ + +------ + The server generated these startup warnings when booting + 2025-09-01T09:45:32.382+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem + 2025-09-01T09:45:33.012+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted + 2025-09-01T09:45:33.012+00:00: This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip
to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning + 2025-09-01T09:45:33.012+00:00: Soft rlimits for open file descriptors too low + 2025-09-01T09:45:33.012+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile + 2025-09-01T09:45:33.012+00:00: For customers running the current memory allocator, we suggest changing the contents of the following sysfsFile + 2025-09-01T09:45:33.012+00:00: We suggest setting the contents of sysfsFile to 0. + 2025-09-01T09:45:33.012+00:00: Your system has glibc support for rseq built in, which is not yet supported by tcmalloc-google and has critical performance implications. Please set the environment variable GLIBC_TUNABLES=glibc.pthread.rseq=0 +------ +test> +``` + +MongoDB installation is complete. You can now proceed with the baseline testing. diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/final-vm.png b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/final-vm.png new file mode 100644 index 0000000000..5207abfb41 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/final-vm.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance.png b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance.png new file mode 100644 index 0000000000..285cd764a5 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance1.png b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance1.png new file mode 100644 index 0000000000..b9d22c352d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance1.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance4.png b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance4.png new file mode 100644 index 0000000000..2a0ff1e3b0 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/instance4.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/ubuntu-pro.png b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/ubuntu-pro.png new file mode 100644 index 0000000000..d54bd75ca6 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/images/ubuntu-pro.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/mongostat.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/mongostat.md new file mode 100644 index 0000000000..17fccf65e0 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/mongostat.md @@ -0,0 +1,130 @@ +--- +title: Monitor MongoDB with mongostat +weight: 8 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Monitoring MongoDB Performance using mongostat +This guide demonstrates real-time MongoDB monitoring using **mongostat** on Arm64 Azure virtual machines. 
It **shows low-latency, stable insert, query, update, and delete operations**, with consistent memory usage and network throughput, providing a quick health-and-performance overview during benchmarking. + +## Monitor with mongostat — Terminal 3 + +```console +mongostat 2 +``` +**mongostat** gives a one-line summary every 2 seconds of inserts, queries, updates, deletes, memory use and network I/O. It’s your quick health-and-throughput dashboard during the test. + +You should see an output similar to: +```output +insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time + *0 *0 *0 *0 0 4|0 0.0% 0.0% 0 3.53G 140M 0|0 0|0 664b 52.7k 6 Sep 4 04:57:16.761 + 50 *0 *0 *0 0 7|0 0.0% 0.0% 0 3.53G 141M 0|0 0|0 10.9k 57.8k 10 Sep 4 04:57:18.761 + 404 13 4 4 71 8|0 0.0% 0.0% 0 3.53G 143M 0|0 0|0 96.3k 114k 10 Sep 4 04:57:20.761 + 7 14 7 7 108 2|0 0.0% 0.0% 0 3.53G 143M 0|0 0|0 21.8k 118k 10 Sep 4 04:57:22.760 + 6 12 6 6 112 0|0 0.0% 0.0% 0 3.53G 143M 0|0 0|0 21.9k 120k 10 Sep 4 04:57:24.760 + 8 16 8 8 136 1|0 0.0% 0.0% 0 3.53G 144M 0|0 0|0 27.1k 137k 10 Sep 4 04:57:26.762 + 5 10 5 5 93 2|0 0.0% 0.0% 0 3.54G 144M 0|0 0|0 18.2k 111k 11 Sep 4 04:57:28.760 + 7 15 7 7 135 0|0 0.0% 0.0% 0 3.54G 144M 0|0 0|0 26.5k 139k 11 Sep 4 04:57:30.761 + 5 11 5 5 102 1|0 0.0% 0.0% 0 3.54G 144M 0|0 0|0 19.7k 118k 11 Sep 4 04:57:32.761 + 7 16 10 7 138 2|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 27.0k 143k 11 Sep 4 04:57:34.761 +insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time + 5 10 5 5 104 1|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 20.1k 121k 11 Sep 4 04:57:36.761 + 8 16 8 8 142 2|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 27.6k 144k 11 Sep 4 04:57:38.761 + 5 11 5 5 114 1|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 21.3k 125k 11 Sep 4 04:57:40.760 + 7 15 7 7 134 1|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 25.9k 141k 11 Sep 4 04:57:42.760 + 5 11 5 5 126 1|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 23.6k 133k 11 Sep 4 04:57:44.761 + 6 12 6 6 128 1|0 0.0% 0.0% 0 3.54G 145M 0|0 0|0 24.4k 136k 11 Sep 4 04:57:46.761 + 6 13 6 6 140 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 26.5k 144k 11 Sep 4 04:57:48.762 + 6 12 6 6 114 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 21.7k 128k 11 Sep 4 04:57:50.762 + 7 15 7 7 164 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 30.4k 157k 11 Sep 4 04:57:52.761 + 5 10 5 5 100 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 18.9k 118k 11 Sep 4 04:57:54.761 +insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time + 8 16 8 8 182 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 34.0k 172k 11 Sep 4 04:57:56.761 + 4 8 4 4 98 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 18.3k 116k 11 Sep 4 04:57:58.762 + 9 18 9 9 198 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 36.4k 179k 11 Sep 4 04:58:00.760 + 4 9 4 4 99 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 18.3k 117k 11 Sep 4 04:58:02.760 + 8 17 8 8 202 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 37.0k 183k 11 Sep 4 04:58:04.762 + 4 9 4 4 103 2|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 19.0k 119k 11 Sep 4 04:58:06.760 + 8 15 7 7 183 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 33.5k 171k 11 Sep 4 04:58:08.761 + 5 11 5 5 126 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 23.1k 135k 11 Sep 4 04:58:10.760 + 6 12 6 6 133 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 24.5k 138k 11 Sep 4 04:58:12.760 + 7 14 7 7 190 1|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 34.1k 174k 11 Sep 4 04:58:14.761 +insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn time + 4 9 4 4 108 2|0 0.0% 0.0% 0 3.54G 146M 0|0 0|0 19.6k 123k 11 Sep 4 04:58:16.760 + 9 18 9 9 220 2|0 0.0% 0.0% 0 3.54G 147M 
0|0 0|0 39.7k 195k 11 Sep 4 04:58:18.760 + 4 8 4 4 112 0|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 20.1k 125k 11 Sep 4 04:58:20.762 + 7 15 7 7 179 1|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 32.4k 169k 11 Sep 4 04:58:22.760 + 5 11 5 5 158 1|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 28.1k 155k 11 Sep 4 04:58:24.761 + 5 9 4 4 117 2|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 21.1k 128k 11 Sep 4 04:58:26.761 + 4 8 4 4 117 1|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 20.7k 127k 6 Sep 4 04:58:28.761 + *0 *0 *0 *0 0 0|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 98b 53.3k 6 Sep 4 04:58:30.762 + *0 *0 *0 *0 0 1|0 0.0% 0.0% 0 3.54G 147M 0|0 0|0 87b 51.0k 3 Sep 4 04:58:32.761 +``` + +## Explanation of mongostat Metrics + +- **insert** - Number of document insert operations per second. +- **query** - Number of query operations (reads) per second. +- **update** - Number of document update operations per second. +- **delete** - Number of delete operations per second. +- **getmore** - Number of getMore operations per second (used when fetching more results from a cursor). +- **command** - Number of database commands executed per second (e.g., createIndex, count, aggregate). + - command = number of regular commands | number of getLastError (GLE) commands +- **dirty/used** - Percentage of the WiredTiger cache that is dirty (not yet written to disk) and the percentage actively used. +- **flushes** - How many times data has been flushed to disk (per second). +- **vsize** - Virtual memory size of the mongod process. +- **res** - Resident memory size (actual RAM in use). +- **qrw arw** - Queued and active readers/writers: + - `qrw` = queued read | queued write. + - `arw` = active read | active write. +- **net_in/net_out** - Amount of network traffic coming into (net_in) and going out of (net_out) the database per second. +- **conn** - Number of active client connections. +- **time** - Timestamp of the sample. + +## Benchmark summary on Arm64 +Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine**. 
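+
+If you want to capture a comparable, fixed-size sample on your own virtual machine, for example to build a table like the one below, you can limit mongostat to a set number of reports and keep a copy of the raw output. This is a minimal sketch; the report count and log file name are arbitrary:
+
+```console
+# Collect 30 reports at 2-second intervals and save them for later comparison
+mongostat --rowcount 30 2 | tee mongostat_arm64.log
+```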
+ +| insert | query | update | delete | getmore | command | dirty | used | flushes | vsize | res | qrw | arw | net_in | net_out | conn | time | +|--------|-------|--------|--------|---------|---------|-------|------|---------|-------|------|------|------|--------|---------|------|----------------------| +| 50 | 0 | 0 | 0 | 0 | 7/0 | 0.0% | 0.0% | 0 | 3.53G | 141M | 0/0 | 0/0 | 10.9k | 57.8k | 10 | Sep 4 04:57:18.761 | +| 404 | 13 | 4 | 4 | 71 | 8/0 | 0.0% | 0.0% | 0 | 3.53G | 143M | 0/0 | 0/0 | 96.3k | 114k | 10 | Sep 4 04:57:20.761 | +| 7 | 14 | 7 | 7 | 108 | 2/0 | 0.0% | 0.0% | 0 | 3.53G | 143M | 0/0 | 0/0 | 21.8k | 118k | 10 | Sep 4 04:57:22.760 | +| 6 | 12 | 6 | 6 | 112 | 0/0 | 0.0% | 0.0% | 0 | 3.53G | 143M | 0/0 | 0/0 | 21.9k | 120k | 10 | Sep 4 04:57:24.760 | +| 8 | 16 | 8 | 8 | 136 | 1/0 | 0.0% | 0.0% | 0 | 3.53G | 144M | 0/0 | 0/0 | 27.1k | 137k | 10 | Sep 4 04:57:26.762 | +| 5 | 10 | 5 | 5 | 93 | 2/0 | 0.0% | 0.0% | 0 | 3.54G | 144M | 0/0 | 0/0 | 18.2k | 111k | 11 | Sep 4 04:57:28.760 | +| 7 | 15 | 7 | 7 | 135 | 0/0 | 0.0% | 0.0% | 0 | 3.54G | 144M | 0/0 | 0/0 | 26.5k | 139k | 11 | Sep 4 04:57:30.761 | +| 5 | 11 | 5 | 5 | 102 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 144M | 0/0 | 0/0 | 19.7k | 118k | 11 | Sep 4 04:57:32.761 | +| 7 | 16 | 10 | 7 | 138 | 2/0 | 0.0% | 0.0% | 0 | 3.54G | 145M | 0/0 | 0/0 | 27.0k | 143k | 11 | Sep 4 04:57:34.761 | +| 5 | 10 | 5 | 5 | 104 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 145M | 0/0 | 0/0 | 20.1k | 121k | 11 | Sep 4 04:57:36.761 | + +## Benchmark summary on x86_64: +Here is a summary of the benchmark results collected on x86_64 **D4s_v6 Ubuntu Pro 24.04 LTS virtual machine**. + +| insert | query | update | delete | getmore | command | dirty | used | flushes | vsize | res | qrw | arw | net_in | net_out | conn | time | +|--------|-------|--------|--------|---------|---------|-------|------|---------|-------|------|------|------|--------|---------|------|----------------------| +| 249 | 2 | 0 | 0 | 0 | 11/0 | 0.0% | 0.0% | 0 | 3.54G | 186M | 0/1 | 0/0 | 52.5k | 66.9k | 10 | Sep 4 05:52:36.629 | +| 208 | 18 | 8 | 8 | 120 | 5/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 64.8k | 134k | 10 | Sep 4 05:52:38.629 | +| 5 | 10 | 5 | 5 | 95 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 18.6k | 110k | 10 | Sep 4 05:52:40.629 | +| 8 | 17 | 8 | 8 | 152 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 30.0k | 144k | 10 | Sep 4 05:52:42.630 | +| 9 | 18 | 9 | 9 | 153 | 2/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 30.2k | 150k | 11 | Sep 4 05:52:46.629 | +| 8 | 17 | 8 | 8 | 161 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 31.3k | 158k | 11 | Sep 4 05:52:52.629 | +| 7 | 15 | 7 | 7 | 150 | 2/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 28.4k | 148k | 11 | Sep 4 05:52:56.628 | +| 8 | 17 | 8 | 8 | 170 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 32.6k | 164k | 11 | Sep 4 05:52:58.629 | +| 8 | 17 | 8 | 8 | 179 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 33.8k | 168k | 11 | Sep 4 05:53:02.631 | +| 9 | 18 | 9 | 9 | 193 | 1/0 | 0.0% | 0.0% | 0 | 3.54G | 190M | 0/0 | 0/0 | 35.8k | 177k | 11 | Sep 4 05:53:12.628 | + +### Highlights from Azure Ubuntu Pro 24.04 LTS Arm64 Benchmarking + +When comparing the results on Arm64 vs x86_64 virtual machines: + +- **Insert, Query, Update, Delete Rates:** Throughput remains consistent, with inserts and queries ranging from **5–50 ops/sec**, while updates and deletes generally track queries. 
A workload burst is observed with an **insert spike of 404**, highlighting MongoDB’s ability to handle sudden surges. +- **Memory Usage:** Resident memory remains stable at **141–145 MB**, with virtual memory steady at **3.53–3.54 GB**, confirming efficient memory allocation and stability. +- **Network Activity:** Network traffic scales proportionally with workload, with **net_in ranging ~18k–96k** and **net_out ~111k–143k**, showing balanced data flow. +- **Connections:** Active connections hold steady at **10–11**, indicating reliable support for concurrent client sessions without instability. +- **Command Execution & System Load:** Command executions (0–8) stay minimal, with dirty/used at **0.0%** and no flushes recorded, reflecting efficient internal resource handling. +- **Overall System Behavior:** MongoDB demonstrates stable throughput, predictable memory usage, and balanced network performance, while also showcasing resilience under workload bursts on Arm64. + + +You have now benchmarked MongoDB on an Azure Cobalt 100 Arm64 virtual machine and compared results with x86_64. diff --git a/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/mongotop.md b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/mongotop.md new file mode 100644 index 0000000000..3886242050 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/mongodb-on-azure/mongotop.md @@ -0,0 +1,494 @@ +--- +title: Monitor MongoDB with mongotop +weight: 7 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Monitoring MongoDB Performance using Mongotop +This guide demonstrates how to monitor MongoDB performance using **mongotop**, showing **read/write** activity across collections in **real time**. It includes benchmark results collected on Azure Arm64 virtual machines, providing a reference for expected latencies. + +## Run mongotop — Terminal 2 + +```console +mongotop 2 +``` +**mongotop** shows how much time the server spends reading and writing each collection (refreshes every 2 seconds here). It helps you see which collections are busiest and whether reads or writes dominate. 
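+
+Before looking at the default tabular output below, note that you can also limit the run to a fixed number of reports, or emit JSON for scripting. This is a minimal sketch; the report count and output file name are arbitrary:
+
+```console
+# Print 10 reports at 2-second intervals, then exit
+mongotop --rowcount 10 2
+
+# Emit the same data as JSON and save it for later parsing or plotting
+mongotop --json --rowcount 10 2 > mongotop_arm64.json
+```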
+ +You should see an output similar to: +```output + ns total read write 2025-09-04T04:57:21Z + benchmarkDB.cursorTest 7ms 1ms 6ms + admin.atlascli 4ms 1ms 2ms +config.system_sessions_bench 3ms 1ms 2ms + benchmarkDB.testCollection 2ms 0ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + baselineDB.perf 0ms 0ms 0ms + baselineDB.testCollection 0ms 0ms 0ms + config.system.sessions 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:23Z + benchmarkDB.cursorTest 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + config.transactions_bench 4ms 1ms 2ms +test.admin_system_version_test 4ms 1ms 2ms + test.atlascli 4ms 1ms 2ms + test.system_sessions_bench 4ms 1ms 2ms + local.system_replset_bench 3ms 1ms 2ms + admin.atlascli 2ms 0ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:25Z + admin.atlascli 4ms 1ms 2ms + config.system_sessions_bench 4ms 1ms 2ms + config.transactions_bench 4ms 1ms 2ms + local.system_replset_bench 4ms 1ms 2ms + benchmarkDB.cursorTest 2ms 0ms 1ms + benchmarkDB.testCollection 2ms 0ms 1ms +test.admin_system_version_test 2ms 0ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:27Z + benchmarkDB.cursorTest 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms +test.admin_system_version_test 4ms 1ms 2ms + test.atlascli 4ms 1ms 2ms + test.system_sessions_bench 4ms 1ms 2ms + admin.atlascli 2ms 0ms 1ms + config.system_sessions_bench 2ms 0ms 1ms + config.transactions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:29Z + admin.atlascli 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + config.system_sessions_bench 4ms 1ms 2ms + config.transactions_bench 4ms 1ms 2ms + local.system_replset_bench 4ms 1ms 2ms + benchmarkDB.cursorTest 3ms 1ms 2ms +test.admin_system_version_test 2ms 0ms 1ms + test.atlascli 2ms 1ms 1ms + test.system_sessions_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:31Z +test.admin_system_version_test 4ms 2ms 2ms + test.atlascli 4ms 2ms 2ms + test.system_sessions_bench 4ms 2ms 2ms + benchmarkDB.cursorTest 3ms 1ms 1ms + admin.atlascli 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 0ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:33Z + admin.atlascli 4ms 2ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + config.system_sessions_bench 4ms 1ms 2ms + config.transactions_bench 4ms 1ms 2ms + local.system_replset_bench 4ms 1ms 2ms + benchmarkDB.cursorTest 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:35Z + benchmarkDB.cursorTest 4ms 1ms 2ms +test.admin_system_version_test 4ms 1ms 2ms + test.atlascli 4ms 1ms 2ms + test.system_sessions_bench 4ms 1ms 2ms + admin.atlascli 2ms 0ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 0ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:37Z + admin.atlascli 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + 
config.system_sessions_bench 4ms 1ms 2ms + config.transactions_bench 4ms 1ms 2ms + local.system_replset_bench 4ms 1ms 2ms + benchmarkDB.cursorTest 2ms 0ms 1ms +test.admin_system_version_test 2ms 0ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:39Z + benchmarkDB.cursorTest 4ms 2ms 2ms +test.admin_system_version_test 4ms 1ms 2ms + test.atlascli 4ms 1ms 2ms + test.system_sessions_bench 4ms 1ms 2ms + admin.atlascli 2ms 0ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 0ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:41Z + admin.atlascli 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + config.system_sessions_bench 4ms 2ms 2ms + config.transactions_bench 4ms 2ms 2ms + local.system_replset_bench 4ms 1ms 2ms + benchmarkDB.cursorTest 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:43Z + benchmarkDB.cursorTest 5ms 2ms 2ms + test.system_sessions_bench 5ms 2ms 2ms +test.admin_system_version_test 4ms 2ms 2ms + test.atlascli 4ms 1ms 2ms + benchmarkDB.testCollection 3ms 2ms 1ms + admin.atlascli 2ms 1ms 1ms + config.system_sessions_bench 2ms 0ms 1ms + config.transactions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:45Z + admin.atlascli 5ms 2ms 3ms + config.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 5ms 2ms 2ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:47Z + benchmarkDB.cursorTest 5ms 2ms 2ms + benchmarkDB.testCollection 5ms 2ms 2ms + local.system_replset_bench 5ms 2ms 2ms +test.admin_system_version_test 5ms 2ms 3ms + test.atlascli 5ms 2ms 2ms + test.system_sessions_bench 4ms 2ms 2ms + admin.atlascli 2ms 1ms 1ms + config.system_sessions_bench 2ms 0ms 1ms + config.transactions_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:49Z + admin.atlascli 5ms 1ms 3ms + config.system_sessions_bench 4ms 1ms 2ms + benchmarkDB.cursorTest 2ms 0ms 1ms + benchmarkDB.testCollection 2ms 0ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 0ms 1ms +test.admin_system_version_test 2ms 0ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 0ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:51Z + benchmarkDB.cursorTest 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + config.transactions_bench 4ms 1ms 3ms + local.system_replset_bench 4ms 1ms 2ms +test.admin_system_version_test 4ms 1ms 2ms + test.atlascli 4ms 1ms 2ms + test.system_sessions_bench 4ms 1ms 2ms + admin.atlascli 2ms 0ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:53Z + admin.atlascli 2ms 0ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 0ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 0ms 1ms 
+test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:55Z + admin.atlascli 5ms 2ms 2ms + config.transactions_bench 5ms 2ms 2ms + test.system_sessions_bench 5ms 2ms 2ms + benchmarkDB.cursorTest 4ms 1ms 2ms + benchmarkDB.testCollection 4ms 1ms 2ms + config.system_sessions_bench 4ms 2ms 2ms + local.system_replset_bench 4ms 1ms 2ms +test.admin_system_version_test 4ms 2ms 2ms + test.atlascli 4ms 2ms 2ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:57Z + admin.atlascli 2ms 1ms 1ms + benchmarkDB.cursorTest 2ms 0ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + config.transactions_bench 2ms 0ms 1ms + local.system_replset_bench 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 0ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:57:59Z + admin.atlascli 5ms 2ms 3ms + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + config.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 5ms 2ms 3ms + local.system_replset_bench 5ms 2ms 2ms +test.admin_system_version_test 5ms 2ms 3ms + test.atlascli 5ms 2ms 3ms + test.system_sessions_bench 5ms 2ms 3ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:01Z + admin.atlascli 2ms 1ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + config.transactions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:03Z + admin.atlascli 5ms 2ms 3ms + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + config.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 5ms 2ms 3ms + local.system_replset_bench 5ms 2ms 3ms +test.admin_system_version_test 5ms 2ms 3ms + test.atlascli 5ms 2ms 3ms + test.system_sessions_bench 5ms 2ms 3ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:05Z + config.transactions_bench 3ms 1ms 1ms + admin.atlascli 2ms 1ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:07Z + admin.atlascli 5ms 2ms 3ms + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + config.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 5ms 2ms 3ms + local.system_replset_bench 5ms 2ms 3ms + test.atlascli 5ms 1ms 3ms +test.admin_system_version_test 2ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:09Z +test.admin_system_version_test 5ms 2ms 3ms + test.system_sessions_bench 5ms 2ms 3ms + admin.atlascli 2ms 1ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + config.transactions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total 
read write 2025-09-04T04:58:11Z + admin.atlascli 5ms 2ms 3ms + config.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 5ms 2ms 3ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:13Z + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + local.system_replset_bench 5ms 2ms 3ms +test.admin_system_version_test 5ms 2ms 3ms + test.atlascli 5ms 2ms 3ms + test.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 3ms 1ms 1ms + admin.atlascli 2ms 1ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:15Z + admin.atlascli 3ms 1ms 1ms + config.transactions_bench 3ms 1ms 1ms +test.admin_system_version_test 3ms 1ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.system_sessions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:17Z + admin.atlascli 6ms 2ms 3ms + config.system_sessions_bench 6ms 2ms 3ms + config.transactions_bench 6ms 2ms 3ms + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + local.system_replset_bench 5ms 2ms 3ms +test.admin_system_version_test 5ms 2ms 3ms + test.atlascli 5ms 2ms 3ms + test.system_sessions_bench 5ms 2ms 3ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:19Z + admin.atlascli 3ms 1ms 1ms + benchmarkDB.cursorTest 3ms 1ms 1ms + benchmarkDB.testCollection 3ms 1ms 1ms + config.system_sessions_bench 3ms 1ms 1ms + config.transactions_bench 3ms 1ms 1ms + local.system_replset_bench 3ms 1ms 1ms +test.admin_system_version_test 3ms 1ms 1ms + test.atlascli 3ms 1ms 1ms + test.system_sessions_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:21Z + admin.atlascli 5ms 2ms 3ms + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + config.system_sessions_bench 5ms 2ms 3ms + config.transactions_bench 5ms 2ms 3ms + local.system_replset_bench 5ms 2ms 3ms + test.atlascli 5ms 1ms 3ms + test.system_sessions_bench 3ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:23Z +test.admin_system_version_test 5ms 2ms 3ms + test.system_sessions_bench 5ms 2ms 3ms + admin.atlascli 3ms 1ms 1ms + config.system_sessions_bench 3ms 1ms 1ms + test.atlascli 3ms 1ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.transactions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:25Z + admin.atlascli 5ms 2ms 3ms + config.system_sessions_bench 4ms 1ms 3ms + test.system_sessions_bench 3ms 1ms 1ms + benchmarkDB.cursorTest 2ms 1ms 1ms + benchmarkDB.testCollection 2ms 1ms 1ms + config.transactions_bench 2ms 1ms 1ms + local.system_replset_bench 2ms 1ms 1ms +test.admin_system_version_test 2ms 1ms 1ms + test.atlascli 2ms 1ms 1ms + admin.system.version 0ms 0ms 0ms + + ns total read write 2025-09-04T04:58:27Z +test.admin_system_version_test 6ms 2ms 3ms + benchmarkDB.cursorTest 5ms 2ms 3ms + benchmarkDB.testCollection 5ms 2ms 3ms + 
+
+## Explanation of Metrics and Namespaces
+
+The columns in the `mongotop` output map to MongoDB's per-namespace usage counters; an example of reading those counters directly from `mongosh` follows the namespace list below.
+
+**Metrics**
+
+- **ns (Namespace)** – Identifies the database and collection being measured.
+- **total** – Total time spent on read and write operations during the sample interval.
+- **read** – Time taken by read operations such as queries and fetches.
+- **write** – Time taken by write operations such as inserts, updates, and deletes.
+- **timestamp** – Marks when the snapshot was captured.
+
+**Namespaces**
+
+- **benchmarkDB.testCollection** – Core benchmark collection with a balanced read/write load.
+- **admin.atlascli** – Tracks admin-level client activity.
+- **benchmarkDB.cursorTest** – Measures cursor operations during benchmarking.
+- **config.system_sessions_bench** – Benchmarks session handling in the config database.
+- **config.transactions_bench** – Evaluates transaction performance in the config database.
+- **local.system_replset_bench** – Tests replica set metadata access.
+- **test.admin_system_version_test** – Monitors versioning metadata in the test database.
+- **test.atlascli** – Simulates client-side workload in the test database.
+- **test.system_sessions_bench** – Benchmarks session handling in the test database.
+- **admin.system.version** – Static metadata collection with minimal activity.
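+
+The per-namespace times that `mongotop` reports come from cumulative usage counters kept by the server, which you can also read with the `top` admin command. The snippet below is a minimal sketch that prints the raw counters (times are in microseconds) for `benchmarkDB.testCollection`, one of the namespaces shown in the output above; substitute any other namespace from the list.
+
+```console
+mongosh --quiet --eval '
+  const totals = db.adminCommand({ top: 1 }).totals;       // cumulative counters since mongod started
+  const t = totals["benchmarkDB.testCollection"];          // any namespace from the list above
+  printjson({ total: t.total, read: t.readLock, write: t.writeLock });
+'
+```
+
+In effect, `mongotop` samples these counters and reports the change over each interval, which is why an idle collection such as `admin.system.version` stays at 0ms.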
+
+## Benchmark summary on Arm64
+
+For easier comparison, the table below summarizes the benchmark results collected on the Arm64 **D4ps_v6** Azure virtual machine running Ubuntu Pro 24.04 LTS.
+
+| Namespace (ns) | Total Time Range | Read Time Range | Write Time Range | Notes |
+| :--- | :--- | :--- | :--- | :--- |
+| **admin.atlascli** | 2–6 ms | 0–2 ms | 1–3 ms | Admin CLI operations. |
+| **benchmarkDB.cursorTest** | 2–5 ms | 0–2 ms | 1–3 ms | Cursor benchmark load. |
+| **benchmarkDB.testCollection** | 2–5 ms | 0–2 ms | 1–3 ms | Main benchmark workload. |
+| **config.system_sessions_bench** | 2–6 ms | 0–2 ms | 1–3 ms | System/benchmark sessions. |
+| **config.transactions_bench** | 2–6 ms | 0–2 ms | 1–3 ms | Internal transaction benchmark. |
+| **local.system_replset_bench** | 2–5 ms | 0–2 ms | 1–3 ms | Local replica set benchmark. |
+| **test.admin_system_version_test** | 2–5 ms | 0–2 ms | 1–3 ms | Version check workload. |
+| **test.atlascli** | 2–5 ms | 0–2 ms | 1–3 ms | CLI/system background operations (test namespace). |
+| **test.system_sessions_bench** | 2–5 ms | 0–2 ms | 1–3 ms | Session benchmark (test namespace). |
+| **admin.system.version** | 0 ms | 0 ms | 0 ms | Minimal or no activity; responses appear instantaneous. |
+
+## Benchmark summary on x86_64
+
+For comparison, the table below summarizes the benchmark results collected on the x86_64 **D4s_v6** Azure virtual machine running Ubuntu Pro 24.04 LTS.
+
+| Namespace (ns) | Total Time Range | Read Time Range | Write Time Range | Notes |
+| :--- | :--- | :--- | :--- | :--- |
+| **admin.atlascli** | 1–5 ms | 0–3 ms | 0–2 ms | Admin CLI activity. |
+| **benchmarkDB.cursorTest** | 1–3 ms | 0–1 ms | 0–1 ms | Cursor iteration benchmark workload. |
+| **benchmarkDB.testCollection** | 1–4 ms | 0–2 ms | 0–2 ms | Main insert/query benchmark activity. |
+| **config.system_sessions_bench** | 1–5 ms | 0–2 ms | 0–2 ms | Session handling benchmark. |
+| **config.transactions_bench** | 1–4 ms | 0–2 ms | 0–2 ms | Transaction handling benchmark. |
+| **local.system_replset_bench** | 1–4 ms | 0–2 ms | 0–2 ms | Local replica set performance test. |
+| **test.admin_system_version_test** | 1–4 ms | 0–1 ms | 0–1 ms | Versioning metadata check. |
+| **test.atlascli** | 1–4 ms | 0–1 ms | 0–2 ms | CLI/system background workload in the test database. |
+| **test.system_sessions_bench** | 1–3 ms | 0–1 ms | 0–2 ms | Session simulation in the test namespace. |
+| **admin.system.version** | 0 ms | 0 ms | 0 ms | No measurable activity; responses appear instantaneous. |
+
+## Highlights from Azure Ubuntu Arm64 benchmarking
+
+When comparing the results from the Arm64 and x86_64 virtual machines:
+
+- **Most active namespaces:** `admin.atlascli`, `benchmarkDB.testCollection`, `benchmarkDB.cursorTest`, and `test.atlascli`, with total times of **2–6 ms**.
+- **Read patterns:** Reads across collections remain in the **0–2 ms** range, showing consistently low-latency performance on Arm64.
+- **Write patterns:** Writes are mostly **1–3 ms**, indicating stable and balanced write performance.
+- **System-related namespaces:** `config.system_sessions_bench` and `config.transactions_bench` show total times of **2–6 ms**, reflecting manageable session and transaction activity.
+- **Idle collections:** `admin.system.version` remains at **0 ms**, confirming minimal or no activity.
+- **Overall observation:** MongoDB operations on Arm64 are lightweight, with **predictable, low-latency reads and writes**, confirming efficient performance on Azure Ubuntu Pro 24.04 LTS Arm64 virtual machines.
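+
+If you want to regenerate similar activity, for example to repeat this comparison on your own Arm64 and x86_64 virtual machines, a short `mongosh` loop is enough to keep the main namespaces busy while `mongotop` runs in a second terminal. This is only a sketch: the document fields and the count of 1,000 operations are arbitrary choices, and it targets the `benchmarkDB.testCollection` namespace used throughout this page.
+
+```console
+mongosh --quiet --eval '
+  const coll = db.getSiblingDB("benchmarkDB").testCollection;   // namespace from the tables above
+  for (let i = 0; i < 1000; i++) {
+    coll.insertOne({ seq: i, payload: "baseline", createdAt: new Date() });  // write activity
+    coll.findOne({ seq: i });                                                // read activity
+  }
+  print("Generated 1000 insert/find pairs on benchmarkDB.testCollection");
+'
+```
+
+Run `mongotop 2` in another terminal while the loop executes to watch read and write activity for `benchmarkDB.testCollection` appear in the live output.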