Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only and Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement of Microsoft of the manufacturer or product. Links may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement of Microsoft of the site or the products contained therein.
© 2022 Microsoft Corporation. All rights reserved.
Microsoft and the trademarks listed at https://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/Usage/General.aspx are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.
Contents
- Predictive Maintenance for remote field devices hands-on lab step-by-step
- Abstract and learning objectives
- Overview
- Solution architecture
- Requirements
- Before the hands-on lab
- Exercise 1: Configuring IoT Central with devices and metadata
- Exercise 2: Run the Rod Pump Simulator
- Exercise 3: Creating a device group
- Exercise 4: Creating a useful dashboard
- Exercise 5: Create an Event Hub and continuously export data from IoT Central
- Exercise 6: Use Azure Databricks and Azure Machine Learning service to train and deploy predictive model
- Exercise 7: Create an Azure Function to predict pump failure
- Task 1: Create an Azure Function Application
- Task 2: Create a notification table in Azure Storage
- Task 3: Create a notification queue in Azure Storage
- Task 4: Create notification service in Microsoft Power Automate
- Task 5: Obtain connection settings for use with the Azure Function implementation
- Task 6: Create the local settings file for the Azure Functions project
- Task 7: Review the Azure Function code
- Task 8: Run the Function App locally
- Task 9: Prepare the Azure Function App with settings
- Task 10: Deploy the Function App into Azure
- After the hands-on lab
In this hands-on lab, you will build an end-to-end industrial IoT solution. We will begin by leveraging the Azure IoT Central SaaS offerings to stand up a fully functional remote monitoring solution quickly. Azure IoT Central provides solutions built upon recommendations found in the Azure IoT reference architecture. We will customize this system specifically for rod pumps. Rod pumps are standard industrial equipment found in the oil and gas industry. We will then establish a model for the telemetry data received from the pump systems in the field and use this model to deploy simulated devices for system testing purposes.
Furthermore, we will establish threshold rules in the remote monitoring system that will monitor the incoming telemetry data to ensure all equipment is running optimally and alert us whenever the equipment is running outside of normal boundaries. Anomalies indicate the need for alternative running parameters, maintenance, or a complete shutdown of the pump. By leveraging the IoT Central solution, users can also issue commands to the pumps from a remote location in an instant to automate many operational and maintenance tasks which used to require staff on-site. Automation lessens operating costs associated with technician dispatch and equipment damage due to a failure.
Above and beyond real-time monitoring and mitigating immediate equipment damage through commanding, you will also learn how to apply the historical telemetry data accumulated to identify positive and negative trends used to adjust daily operations for higher throughput and reliability.
The Predictive Maintenance for Remote Field Devices hands-on lab is an exercise that will challenge you to implement an end-to-end scenario using the supplied example based on Azure IoT Central and other related Azure services. It is beneficial to pair up with other members at the lab to model a real-world experience and allow each member to share their expertise for the overall solution rather than implementing the lab independently.
Azure IoT Central is at the core of the preferred solution. It is used for data ingest, device management, data storage, and reporting. IoT field devices securely connect to IoT Central through its cloud gateway. The continuous export component sends device telemetry data to Azure Blob storage for cold storage, and the same data to Azure Event Hubs for real-time processing. Azure Databricks uses the data stored in cold storage to periodically re-train a Machine Learning (ML) model to detect oil pump failures. It is also used to deploy the trained model to a web service hosted by Azure Kubernetes Service (AKS) or Azure Container Instances (ACI), using Azure Machine Learning. An Azure Function is triggered by events flowing through Event Hubs. It sends the event data for each pump to the web service hosting the deployed model, then sends an alert through Microsoft Power Automate if an alert has not been sent within a configurable period of time. The alert is sent in the form of an email, identifying the failing oil pump with a suggestion to service the device.
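The "at most one alert per device within a configurable period" behavior described above can be sketched as a small piece of per-device state. This is an illustrative Python sketch only — the lab's actual Azure Function is written in C# and keeps this state in an Azure Storage table; the one-hour window and the in-memory dictionary are assumptions for this example:

```python
from datetime import datetime, timedelta

ALERT_COOLDOWN = timedelta(hours=1)  # assumed configurable period
_last_alert = {}  # device ID -> time of last alert (the lab persists this in Table storage)

def should_alert(device_id: str, now: datetime) -> bool:
    """Return True (and record the time) if no alert was sent within the cooldown window."""
    last = _last_alert.get(device_id)
    if last is None or now - last >= ALERT_COOLDOWN:
        _last_alert[device_id] = now
        return True
    return False

t0 = datetime(2022, 1, 1, 12, 0)
```

With this gate in place, repeated failure predictions for the same pump within the window produce a single notification rather than an email per telemetry message.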
Azure IoT Central architecture
The diagram above shows the components of IoT Central's architecture that pertain to Fabrikam's use case. Because it is a SaaS-based solution, the Azure services IoT Central uses are hidden from view. IoT field devices securely connect to IoT Central through its cloud gateway. The gateway uses Azure IoT Hub Device Provisioning Service (DPS) to streamline device management, and the underlying IoT Hub service facilitates bi-directional communication between the cloud and IoT devices. All device telemetry is stored in a time series data store, based on Azure Time Series Insights. The application data store persists the IoT Central application and its customizations. This application provides a user interface shell, through which Fabrikam's users manage devices and associated metadata, view dashboards and reports, and configure rules and actions to react to device telemetry that indicates possible rod pump failure. Pervasive throughout the end-to-end solution is security in transit and at rest for devices and the web-based management application. The continuous data export feature is used to enable hot and cold path workloads on real-time and batch device telemetry and metadata, using external services. Batch data is exported to Azure Blob storage in Apache Avro file format each minute, and real-time data is exported to either Azure Event Hubs or Azure Service Bus.
- Microsoft Azure subscription (non-Microsoft subscription; must be a pay-as-you-go subscription).
- .NET Core 3.1
- Visual Studio Code version 1.39 or greater
- C# Visual Studio Code Extension
- Azure Functions Core Tools version 2.x (using NPM or Chocolatey - see readme on GitHub repository)
- Azure Functions Visual Studio Code Extension
- An Azure Databricks cluster running Databricks Runtime 10.3 or above.
Refer to the Before the hands-on lab setup guide before continuing to the lab exercises.
Duration: 45 minutes
Azure IoT Central is a Software as a Service (SaaS) offering from Microsoft. The aim of this service is to provide a frictionless entry into the cloud computing and IoT space. The core focus of many industrial companies is not on cloud computing; therefore, they do not necessarily have the personnel skilled to provide guidance and to stand up a reliable and scalable infrastructure for an IoT solution. It is imperative for these types of companies to enter the IoT space, not only for the cost savings associated with remote monitoring, but also to improve safety for their workers and the environment.
Fabrikam is one such company that could use a helping hand entering the IoT space. They have recently invested in sensor technology on their rod pumps in the field, and they are ready to implement their cloud-based IoT Solution. Their IT department is small and unfamiliar with cloud-based IoT infrastructure; their IT budget also does not afford them the luxury of hiring a team of contractors to build out a solution for them.
The Fabrikam CIO has recently heard of Azure IoT Central. This online offering will streamline the process of getting their sensor data to the cloud, where they can monitor their equipment for failures and improve their maintenance practices, without having to worry about the underlying infrastructure. A predictable cost model also ensures that there are no financial surprises.
The first task is to identify the data that the equipment will be sending to the cloud. This data will contain fields that represent the data read from the sensors at a specific instant in time. This data will be used in downstream systems to identify patterns that can lead to cost savings, increased safety, and more efficient work processes.
The telemetry reported by the Fabrikam rod pumps is as follows; we will be using this information later in the lab:
| Field | Type | Description |
|---|---|---|
| SerialNumber | String | Unique serial number identifying the rod pump equipment |
| IPAddress | String | Current IP address |
| TimeStamp | DateTime | Timestamp in UTC identifying the point in time the telemetry was created |
| PumpRate | Numeric | Speed calculated over the time duration between the last two times the crank arm has passed the proximity sensor, measured in Strokes Per Minute (SPM) - minimum 0.0, maximum 100.0 |
| TimePumpOn | Numeric | Number of minutes the pump has been on |
| MotorPowerkW | Numeric | Motor power, measured in kilowatts (kW) |
| MotorSpeed | Numeric | Motor speed including slip, measured in RPM |
| CasingFriction | Numeric | Casing friction, measured in PSI (psi) |
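Put together, a single telemetry message matching this schema might look like the following. The values shown here are illustrative, not taken from the lab's simulator:

```python
import json
from datetime import datetime, timezone

# Illustrative telemetry message matching the rod pump schema above.
# Field names mirror the table; the values are made up for this example.
telemetry = {
    "SerialNumber": "PUMP-0001",
    "IPAddress": "192.168.1.100",
    "TimeStamp": datetime(2022, 1, 1, 12, 0, 0, tzinfo=timezone.utc).isoformat(),
    "PumpRate": 17.5,         # Strokes Per Minute (0.0 - 100.0)
    "TimePumpOn": 342.0,      # minutes
    "MotorPowerkW": 31.2,     # kilowatts
    "MotorSpeed": 414,        # RPM, including slip
    "CasingFriction": 1712.0  # psi
}

# Devices serialize the reading to JSON before sending it to the cloud gateway.
payload = json.dumps(telemetry)
print(payload)
```

Downstream systems key off these field names, so the device template we define next uses the same names.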
1. Navigate to the Azure portal.

2. Expand the left menu, select + Create a resource, type "IoT Central" in the search field, then select IoT Central application from the results.

3. On the IoT Central application resource overview screen, select Create.

4. In the IoT Central Application form, fill out the following settings:

    | Field | Value |
    |---|---|
    | Resource name | Enter a globally unique name. |
    | Application URL | Keep the default. |
    | Subscription | Select the appropriate subscription. |
    | Resource group | Select `Fabrikam_Oil`. |
    | Pricing plan | Select Standard 2. |
    | Template | Select Custom application. |
    | Location | Select the region nearest to you. |

5. Select Create.

6. Wait for the application to be provisioned.

7. In the Azure Portal, open the `Fabrikam_Oil` resource group. Select the IoT Central Application resource from the listing.

8. From the IoT Central Application Overview screen, select the IoT Central Application URL. This opens the IoT Central application in a new tab in your browser.
1. We need to define the type of equipment we are using, and the data associated with the equipment. To do this, we must define a Device Template. Select the Device Templates menu item from the left-hand menu, then select + New from the toolbar menu.

2. On the Select type screen, select IoT device as the custom device template type. Select the Next: Customize button.

3. On the Customize form, for the device template name, enter Rod Pump. Keep the `Gateway device` checkbox unchecked. Select the Next: Review button.

4. On the Review screen, select the Create button.

5. On the `Rod Pump` device template screen, select Custom model beneath the `Create a model` heading.

6. On the Rod Pump Model screen, select the `Model` parent item in the central navigation pane. Select Add capability to default component from beneath the `Capabilities` heading.

7. The capabilities of the device model describe the telemetry expected from the device, the commands it responds to, and its properties (device twin properties). We'll begin by defining the telemetry values; populate the capabilities as described in the following table. Use the + Add capability button at the bottom of the screen to add additional telemetry values, and select the Save button from the toolbar once complete. You will need to expand each capability form to edit the Schema, Unit, Display Unit, and Description fields.

    | Display Name | Name | Capability Type | Semantic Type | Schema | Unit | Display Unit | Description |
    |---|---|---|---|---|---|---|---|
    | Pump Rate | PumpRate | Telemetry | None | Double | None | SPM | Speed calculated over the time duration between the last two times the crank arm has passed the proximity sensor, measured in Strokes Per Minute (SPM) |
    | Time Pump On | TimePumpOn | Telemetry | Time Span | Double | Minute | Minutes | Number of minutes the pump has been on |
    | Motor Power | MotorPowerKw | Telemetry | Power | Double | None | kW | Measured in kilowatts (kW) |
    | Motor Speed | MotorSpeed | Telemetry | Angular Velocity | Integer | Revolution per minute | RPM | Motor speed including slip (RPM) |
    | Casing Friction | CasingFriction | Telemetry | None | Double | None | PSI | Casing friction, measured in PSI |

8. We also need to define the current state of the pump: whether or not it is running. Remaining on the same screen, select + Add capability. Add the state with the display name of Power State, field name of PowerState, capability type of Telemetry, and semantic type of State. For Value schema, select String, then add the values Unavailable, On, and Off (in each of the Display name, Name, and Value columns). Select the Save button from the toolbar menu.

9. In the device template, properties are metadata associated with the equipment. These are added on the same form as the capabilities, only using Property for the Capability type field. For our template, we will expect a property for Serial Number, IP Address, and the geographic location of the pump. Remaining on the same screen, select + Add capability. Define the device properties as follows, then select Save from the toolbar menu:

    | Display Name | Name | Capability Type | Semantic Type | Schema | Writable | Description |
    |---|---|---|---|---|---|---|
    | Serial Number | SerialNumber | Property | None | String | Off | The serial number of the rod pump |
    | IP Address | IPAddress | Property | None | String | Off | The IP address of the rod pump |
    | Pump Location | Location | Property | Location | Geopoint | Off | The geographic location of the rod pump |

10. Operators and field workers will want to be able to turn the pumps on and off remotely. In order to do this, we will define a command. Remaining on the same screen, select + Add capability. Create a command as follows, and select Save from the toolbar:

    - Display Name - Toggle Motor Power
    - Name - ToggleMotorPower
    - Capability Type - Command
    - Request - On
    - (Request) Display name - Toggle
    - (Request) Name - Toggle
    - (Request) Schema - Boolean
    - Description - Toggle the motor power of the pump on and off.

11. The Rod Pump capabilities should now look similar to the following (collapsed):

12. Now, we can define device-specific views to help us visualize the telemetry and state of the rod pumps. We can also create forms to allow pump operators to execute commands on a device. From the center pane navigation menu of the Rod Pump device template, select the Views item. On the `Select to add a new view` screen, select the Visualizing the device card.

13. A view is composed of one or more tiles that display information related to a specific device. In the Create view form, set the View name to Dashboard. Underneath Add a tile, select Start with devices. Then expand the Telemetry drop-down, select each item individually, and choose the Add tile button to add the chart to the view surface. Feel free to arrange the tiles as desired on the view design surface.

14. Remaining on the Dashboard view, expand the Property drop-down list, and add a tile for `IP Address`, `Serial Number`, and `Pump Location`. Note that the tile for `Pump Location` renders with a map icon, meaning that IoT Central has identified the property as geography-based data and will render it on a map appropriately.

15. Spend some time now to investigate the various visualizations and settings you can set on each tile. For instance, you have the ability to customize chart types, colors, and axes. You can also resize each tile individually. Select the Save button in the toolbar menu to save the Dashboard view.

16. On the Rod Pump device template screen, select the Views item from the central navigation pane, and choose Visualizing the device once again to create a new view. Name this view Command, and add a tile for the Toggle Motor Power command. Once complete, press the Save button in the toolbar. This view will allow pump operators to initiate the toggle power command from the IoT Central application.

17. Finally, we can add a thumbnail image to represent the equipment. Select Device templates, then select Rod Pump. Select the circle icon to the left of the template name. This will allow you to select an image file. The image used in this lab can be found on PixaBay. After setting the thumbnail, select the Publish button in the device template toolbar, then select Publish in the dialog to publish the device template.
Under the hood, Azure IoT Central uses the Azure IoT Hub Device Provisioning Service (DPS). The aim of DPS is to provide a consistent way to connect devices to the Azure Cloud. Devices can utilize Shared Access Signatures, or X.509 certificates to securely connect to IoT Central.
Multiple options exist to register devices in IoT Central, ranging from individual device registration to bulk device registration via a comma-delimited file. In this lab, we will register devices individually using shared access signatures (SAS).
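For individual enrollments like the ones in this lab, IoT Central generates each device's symmetric key for you. When registering many devices through a DPS group enrollment instead, the per-device key is derived from the group SAS key by signing the device ID with HMAC-SHA256. A sketch of that derivation (the group key below is made up, not a real credential):

```python
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64: str, device_id: str) -> str:
    """Derive a per-device symmetric key from a DPS group enrollment key.

    The device key is the HMAC-SHA256 of the device ID, keyed with the
    base64-decoded group key, re-encoded as base64.
    """
    group_key = base64.b64decode(group_key_b64)
    signed = hmac.new(group_key, device_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(signed.digest()).decode("ascii")

# Example with a placeholder group key (illustration only):
example_group_key = base64.b64encode(b"0" * 32).decode("ascii")
print(derive_device_key(example_group_key, "DEVICE001"))
```

Because the derivation is deterministic, each device can compute its own key at provisioning time without the group key ever being stored on the device fleet's backend per device.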
1. In the left-hand menu of your IoT Central application, select Devices. Select the Rod Pump template from the `Devices` blade and select the + New button from the toolbar to add a new device.

2. A modal window will be displayed with an automatically generated Device ID and Device Name. You are able to overwrite these values with anything that makes sense in your downstream systems. We will be creating three real devices in this lab. Create the following as real devices, ensuring `Simulate this device` remains toggled off:

    | Device Name | Device ID |
    |---|---|
    | Rod Pump - DEVICE001 | DEVICE001 |
    | Rod Pump - DEVICE002 | DEVICE002 |
    | Rod Pump - DEVICE003 | DEVICE003 |

3. On the Devices list, notice how all three real devices have the provisioning status of Registered.
Duration: 30 minutes
Included with this lab is source code that simulates the connection and telemetry of three real pumps. In the previous exercise, we defined them as DEVICE001, DEVICE002, and DEVICE003. The purpose of the simulator is to demonstrate real-world scenarios that include a normal healthy rod pump (DEVICE002), a gradually failing pump (DEVICE001), and an immediately failing pump (DEVICE003).
1. In IoT Central, select Devices from the left-hand menu. Then, from the devices list, select the link for Rod Pump - DEVICE001, and select Connect located in the toolbar.

2. A Device connection modal is displayed. Make note of the ID Scope, Device ID, and primary key values.

3. Repeat steps 1 and 2 for DEVICE002 and DEVICE003.

4. Using Visual Studio Code, open the `C:\MCW-Predictive-Maintenance-for-remote-field-devices-main\Hands-on lab\Resources\FieldDeviceSimulator\Fabrikam.FieldDevice.Generator` folder. If you are prompted by Visual Studio Code to install additional components to enable debugging, select the option to install the components.

5. Open the appsettings.json file, and copy and paste the ID Scope and Device Primary Key values into the file.

6. Open Program.cs and go to line 144, where you will find the SetupDeviceRunTasks method. This method is responsible for creating the source code representations of the devices that we defined earlier in the lab. Each device is initialized based on the values obtained from configuration (appsettings.json). Note that DEVICE001 is defined as the pump that will gradually fail, DEVICE002 as a healthy pump, and DEVICE003 as a pump that will fail immediately after a specific amount of time. Line 167 also adds an event handler that fires every time the Power State for a pump changes. The power state of a pump is changed via a cloud-to-device command - we will visit this concept later in this lab.

7. Open PumpDevice.cs. This class represents a device in the field. It encapsulates the properties (serial number and IP address) that are expected in the properties for the device in the cloud. It also maintains its own power state. Line 73 shows the RegisterAndConnectDeviceAsync method, which is responsible for connecting to the global device provisioning endpoint to obtain the underlying IoT Hub connection information for the device and establish a connection to the IoT Central application (through the DeviceClient). Line 110 shows the SendDevicePropertiesAndInitialState method, which updates the reported properties from the device to the cloud; this is also referred to as the device twin. Line 154 shows the SendEvent method, which sends the generated telemetry data to the cloud.

8. Within Visual Studio Code, expand the .vscode sub-folder, then open launch.json. Update the `console` setting to externalTerminal. This causes the debugger to launch the console window in an external terminal instead of within Visual Studio Code. This is a required step, since the internal terminal does not support entering values (ReadLine).

9. Using Visual Studio Code, debug the current project by pressing F5.

10. Once the menu is displayed, select option 1 to generate and send telemetry to IoT Central.

11. Allow the simulator to start sending telemetry data to IoT Central; you will see output similar to the following:

12. Allow the simulator to run while continuing with this lab.

13. After some time has passed, in IoT Central select the Devices item in the left-hand menu. Note that the provisioning status of DEVICE001, DEVICE002, and DEVICE003 now indicates Provisioned.
DEVICE001 is the rod pump that will gradually fail. After running the simulator for approximately 10 minutes (around 1,100 messages), you can take a look at the Motor Power chart on the device Dashboard, or on the measurements tab, and watch the power consumption decrease.
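The gradual-failure pattern can be illustrated with a simplified generator: healthy readings hover around a baseline, and after a failure threshold the motor power decays a little more with each message. This is a sketch of the idea only — the parameter values are made up, and the simulator's actual implementation is the C# code reviewed above:

```python
import random

def motor_power_readings(total, fail_after, baseline=32.0, decay=0.05, seed=42):
    """Yield simulated MotorPowerkW values that degrade after `fail_after` messages."""
    rng = random.Random(seed)
    power = baseline
    for i in range(total):
        if i >= fail_after:
            power -= decay  # gradual degradation once the failure begins
        # Small jitter around the current level, clamped so power never goes negative.
        yield max(0.0, power + rng.uniform(-0.5, 0.5))

readings = list(motor_power_readings(total=200, fail_after=100))
```

Plotting `readings` reproduces the shape you should see on the Motor Power chart: a flat band followed by a steady downward slope.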
1. In IoT Central, select the Devices menu item, then select the link for Rod Pump - DEVICE001 in the Devices list.

2. Ensure the Dashboard tab is selected and observe how the Motor Power usage of DEVICE001 gradually degrades. Note: You may not yet see the degradation at this stage. The motor power will gradually decrease after running for several minutes.

3. Repeat steps 1 and 2 for the remaining devices, and observe that DEVICE002, the non-failing pump, remains above the 30 kW threshold. DEVICE003 is also a failing pump, but displays an immediate failure rather than a gradual one.
After observing the failure of two of the rod pumps, you are able to cycle the power state of a pump remotely. The simulator is set up to receive the Toggle Motor Power command from IoT Central; it will update its state accordingly and start or stop sending telemetry to the cloud.
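On the device side, handling this command boils down to flipping the power state and gating telemetry on it. The following is a minimal Python sketch of that behavior (the simulator's real implementation is the C# PumpDevice class, and the telemetry shape here is abbreviated):

```python
class SimulatedPump:
    """Minimal sketch of the simulator's power-state handling."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.power_on = True  # pumps start powered on

    def handle_toggle_motor_power(self, toggle: bool):
        """Cloud-to-device command handler: set the motor power state."""
        self.power_on = toggle

    def next_telemetry(self):
        """Only a powered-on pump produces telemetry."""
        if not self.power_on:
            return None
        return {"SerialNumber": self.device_id, "PowerState": "On"}

pump = SimulatedPump("DEVICE001")
pump.handle_toggle_motor_power(False)  # operator turns the motor off from IoT Central
```

Once the motor is toggled off, `next_telemetry()` returns nothing — which is why the IoT Central dashboard stops receiving data until the pump is toggled back on.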
1. In IoT Central, select Devices from the left-hand menu, then select Rod Pump - DEVICE001 from the device list. Observe that even though the pump has, for all intents and purposes, failed, there is still power to the motor. In order to recover DEVICE001, select the Command tab. You will see the Toggle Motor Power command with a Toggle parameter. Set the Toggle parameter to `False`, then select the Run button on the command to turn the pump motor off.

2. The simulator will indicate that the command has been received from the cloud. Note in the output of the simulator that DEVICE001 is no longer sending telemetry, due to the pump motor being off.

3. After a few moments, return to the Dashboard tab of DEVICE001 in IoT Central. Note that the receipt of telemetry has stopped, and the state indicates the motor power is off.

4. Return to the Commands tab and toggle the motor power back on by selecting `True` and pressing the Run button once more. On the Dashboard tab, you will see the Power State switch back to on, and telemetry start flowing again. Due to the restart of the rod pump, it has now recovered, and telemetry is back within normal ranges.
Duration: 10 minutes
Device groups allow you to create logical groupings of IoT devices in the field based on the properties that define them. In this case, we will create a device group that contains only the rod pumps located in the state of Texas.
In the field, all Texas pumps are located in the `192.168.1.*` subnet, so we will create a filter to include only those pumps in this device group.
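The effect of the filter can be pictured as a simple predicate over the registered devices. The device list below is illustrative (only the three lab devices are real; the fourth entry is a hypothetical out-of-state pump added to show a non-match):

```python
devices = [
    {"DeviceID": "DEVICE001", "IPAddress": "192.168.1.101"},
    {"DeviceID": "DEVICE002", "IPAddress": "192.168.1.102"},
    {"DeviceID": "DEVICE004", "IPAddress": "10.0.0.7"},  # hypothetical non-Texas pump
]

# The device group rule "IP Address contains 192.168.1." keeps only the Texas subnet.
texas_rod_pumps = [d for d in devices if "192.168.1." in d["IPAddress"]]
print([d["DeviceID"] for d in texas_rod_pumps])
```

IoT Central evaluates this membership continuously, so newly registered pumps in the subnet join the group automatically.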
1. In the left-hand menu, select the Device groups menu item. You will see a single default device group in the list (Rod Pump - All devices). Select the + New button in the toolbar.

2. Set the name for this device group to `Texas Rod Pumps`. For the description, enter `Rod pumps located in Texas`.

3. In the Create a device query section, in the Value column, select Rod Pump.

4. Select + Filter to add another filter.

5. Set up the filter with the following settings:

    - Name: IP Address
    - Operator: Contains
    - Value: `192.168.1.`

6. Select Save from the toolbar. Your results should be:

    | Field | Value |
    |---|---|
    | Device Group Name (in header) | Texas Rod Pumps |
    | Description (in header) | Rod pumps located in Texas |
    | Scope Device Template | Rod Pump |
    | Filter: Property | IP Address |
    | Filter: Operator | contains |
    | Filter: Value | 192.168.1. |
Once the device group is saved, you are able to act upon this group of devices as a single unit within IoT Central.
Duration: 30 minutes
One of the main features of IoT Central is the ability to visualize the health of your IoT system at a glance. Creating a customized dashboard that best fits your business is instrumental in improving business processes. In this exercise, we will go over adding a main dashboard that will be displayed upon entry to the IoT Central application.
1. In the left-hand menu, select the Dashboard item. Then, select the Edit button.

2. Expand the ellipsis menu on each tile that you do not wish to see on the dashboard and select Delete to remove it. Some tiles are deleted by selecting the X button on the tile.

3. Remaining in the edit mode of the dashboard, select Image (static) from the Add a tile section of the menu. Select Add tile.

4. Once the tile is added to the design surface, expand the ellipsis menu on the tile and select Configure. Configure the logo with the file `C:\MCW-Predictive-Maintenance-for-remote-field-devices-main\Hands-on lab\media\fabrikam-logo.png`. Select the Update button once complete.

5. Resize the logo on the dashboard using the handle on the lower right of the logo tile.
In the previous exercise, we created a device group that contains the devices located in Texas. We will leverage this device group to display this filtered information.
1. Remaining in the edit mode of the dashboard, select Start with devices from the Add a tile section of the menu.

2. Select Texas Rod Pumps from the Device group dropdown, then select each of the devices in the Devices list.

3. In the Property section, select the Serial Number and IP Address properties. Add properties by selecting the + Property button. Once complete, select the Add tile button.
It is beneficial to see the location of certain critical Texas rod pumps. We will add a map that will display the location of each of the Texas Rod Pump devices.
1. Return to the Edit view of the Dashboard. Select Start with a visual from the Add a tile section of the menu. Select Map (property). Select Add tile.

2. Once the tile is added to the design surface, select Edit (the pencil icon) from the tile's toolbar. Configure the map with the following settings, then select the Update button to save the configuration changes:

    | Field | Value |
    |---|---|
    | Title | Pump Location |
    | Device Group | Texas Rod Pumps |
    | Devices | select each device |
    | Property | Pump Location |

3. Observe how the dashboard now has a map displaying markers for each device in the group. Feel free to adjust the zoom to better infer their locations.

4. Return to the edit view of the Dashboard and experiment with adding additional visualizations relative to the Texas Rod Pump device group. For instance, add a Line chart tile that shows the Casing Friction data for all of the devices in a single chart.
Duration: 15 minutes
IoT Central provides a great first stepping stone into a larger IoT solution. A more mature IoT solution typically involves a machine learning model that will process incoming telemetry to logically determine if a failure of a pump is imminent. The first step into this implementation is to create an Event Hub to act as a destination for IoT Central's continuously exported data.
The Event Hub we will be creating will act as a collector for data coming into IoT Central. The receipt of a message into this hub will ultimately serve as a trigger to send data into a machine learning model to determine if a pump is in a failing state. We will also create a Consumer Group on the event hub to serve as an input to an Azure Function that will be created later on in this lab.
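Messages arriving on the Event Hub carry the exported telemetry as JSON. The exact envelope depends on the export configuration, so treat the field names below as an assumption for illustration; the parsing itself is the first thing any consumer in the new consumer group (such as the Azure Function built later in this lab) would do:

```python
import json

# A telemetry event body as it might arrive from the IoT Central data export.
# The envelope shape shown here is an assumption, not the exact export format.
raw_event = json.dumps({
    "deviceId": "DEVICE001",
    "enqueuedTime": "2022-01-01T12:00:00Z",
    "telemetry": {"MotorPowerkW": 28.4, "PumpRate": 16.2, "CasingFriction": 1650.0},
})

def parse_event(body: str):
    """Extract the device ID and telemetry fields from an exported event body."""
    event = json.loads(body)
    return event["deviceId"], event["telemetry"]

device_id, telemetry = parse_event(raw_event)
```

With the device ID and telemetry separated, the consumer can score the telemetry against the predictive model and attribute any failure prediction to the right pump.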
-
Log into the Azure Portal, and open your Fabrikam_Oil resource group.
-
On the top of the screen, select the Create button. When the marketplace screen displays, search for and select Event Hubs. This will allow you to create a new Event Hub Namespace resource. Select the Create button on the resource overview screen.
-
Configure the event hub as follows, select the Review + create* button, and then Create:
Field Value Subscription select the appropriate subscription Resource Group Fabrikam_Oil Name anything (must be globally unique) Location select the location nearest to you Pricing Tier Standard -
Once the Event Hubs namespace has been created, open it and select the + Event Hub button at the top of the screen.
-
In the Create Event Hub form, configure the hub as follows and select the Create button:
Field Value Name iot-central-feed Partition Count 2 Message Retention 1 Capture Off -
Once the Event Hub has been created, open it by selecting Event Hubs in the left-hand menu, and selecting the hub from the list.
-
From the top menu, select the + Consumer Group button to create a new consumer group for the hub. Name the consumer group ingressprocessing and select the Create button.
-
Return to the IoT Central application, from the left-hand menu, select Data export. Then select the + New button from the toolbar menu.
-
Begin configuring the data export with the following values:
Field Value Display Name (Header) Event Hub
FeedEnabled (Header) On Type of data to export Telemetry -
In the Destinations section, select the create a new one link. Configure the new destination as follows, then select the Create button:
Field Value Destination name iot-central-event-hub-feed
Destination type Azure Event Hubs Connection string see subsection below on how to get the connection string Event Hub iot-central-feed
Obtain the connection string as follows:
-
Navigate to your Event Hubs namespace in the Azure portal.
-
Select Shared access policies on the left-hand menu, then select the RootManageSharedAccessKey and copy the Connection string-primary key.
-
Return to IoT Central and paste the connection string into the Connection string field, then select the iot-central-feed event hub you created.
-
Select Save from the toolbar menu on the Event Hub Feed continuous export screen.
-
The Event Hub Feed export will be created, and then started (it may take a few minutes for the export to start). Return to the Data export list to see the current status of the feed.
Exercise 6: Use Azure Databricks and Azure Machine Learning service to train and deploy predictive model
Duration: 15 minutes
Note: The steps to go through this exercise will take about 15 minutes. However, the processing time may be longer, depending on Databricks' processing.
In this exercise, we will use Azure Databricks to train a deep learning model for anomaly detection by teaching it to recognize the normal operating conditions of a pump. We use three data sets for training: telemetry generated by a pump operating under normal conditions, by a pump suffering a gradual failure, and by a pump that fails immediately.
After training the model, we validate it, then register the model in your Azure Machine Learning service workspace. Finally, we deploy the model in a web service hosted by Azure Container Instances for real-time scoring from the Azure function.
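The notebook trains a deep learning model, but the underlying idea can be illustrated with a much simpler sketch: learn the statistics of telemetry from a healthy pump, then flag readings that deviate strongly from that "normal" profile. This is a conceptual illustration only, not the notebook's actual model, and the ~150 kW motor-power figures are made up for the example.

```python
# Conceptual sketch of anomaly detection: fit statistics on healthy-pump
# telemetry, then flag strong deviations. NOT the lab's deep learning model.

def fit_normal_profile(readings):
    """Compute mean and standard deviation of normal telemetry."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((x - mean) ** 2 for x in readings) / n
    return mean, variance ** 0.5

def is_anomalous(value, mean, std, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * std

# Hypothetical healthy pump: motor power hovering around 150 kW.
normal_power = [149.8, 150.2, 150.0, 149.9, 150.1, 150.3, 149.7]
mean, std = fit_normal_profile(normal_power)

print(is_anomalous(150.1, mean, std))  # in-range reading -> False
print(is_anomalous(180.0, mean, std))  # far outside normal -> True
```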
-
In the Azure portal, open your lab resource group, then open your Azure Databricks Service.
-
Select Launch Workspace. Azure Databricks will automatically sign you in through its Azure Active Directory integration.
-
In Azure Databricks, select Workspace, select Users, then select your username.
-
Select the Anomaly Detection notebook to open it.
-
Before you can execute the cells in this notebook, you must first attach your Databricks cluster. Expand the dropdown at the top of the notebook where you see Detached. Select your lab cluster to attach it to the notebook. If it is not currently running, you will see an option to start the cluster.
-
Provide values in the Cmd 56 cell. You can find your Machine Learning workspace information from the resource group details in the Azure portal.
-
You may use keyboard shortcuts to execute the cells, such as Ctrl+Enter to execute a single cell, or Shift+Enter to execute a cell and move to the next one below.
Note: Cmd 58 will request you to authenticate your device. Pay close attention to the URL and the code for authentication.
Cmd 60 may take upwards of 10 minutes to create the container registry. It took about 30 minutes when we updated this in February 2022.
-
Copy the scoring web service URL from the last cell's result after executing it. You will use this value to update a setting in your Azure function in the next exercise to let it know where the model is deployed.
Duration: 45 minutes
We will be using an Azure Function to read incoming telemetry from IoT Hub and send it to the HTTP endpoint of our predictive maintenance model. The function will receive a 0 or 1 from the model indicating whether or not a pump should be maintained to avoid possible failure. A notification will also be initiated through Microsoft Power Automate to notify field workers of the required maintenance.
-
Return to the Azure Portal.
-
Open your resource group for this lab.
-
From the top menu, select the + Create button, and search for Function App. Select the Function App result, then select Create.
-
Configure the Function App with the following settings, then select Next: Hosting >:
App Name: your choice, must be globally unique
Subscription: select the appropriate subscription
Resource Group: use existing, and select Fabrikam_Oil
Publish: select Code
Runtime Stack: select .NET
Version: select 3.1
Location: select the location nearest to you
-
Configure the Hosting options as follows, then select Review + create:
Storage Account: retain the default value of create new
Operating System: Windows
Plan Type: Consumption (Serverless)
-
On the Review blade, select Create, then wait until the Function App is created before continuing.
One of the things we would like to avoid is sending repeated notifications to the workforce in the field. Notifications should only be sent once every 24-hour period per device. To keep track of when a notification was last sent for a device, we will use a table in a Storage Account.
-
In the Azure portal, select Resource groups from the left-hand menu, then select the Fabrikam_Oil link from the listing.
-
Select the link for the storage account that was created with the Function App in Task 1. The name will start with "storageaccount".
-
From the Storage Account left-hand menu, select Tables from the Data storage section, then select the + Table button, and create a new table named DeviceNotifications.
-
Keep the Storage Account open in your browser for the next task.
There are many ways to trigger flows in Microsoft Power Automate. One of them is monitoring an Azure Queue. We will create a queue in our Azure Storage Account for this purpose.
-
From the Storage Account left-hand menu, select Queues located beneath the Data storage section, then select the + Queue button, and create a new queue named flownotificationqueue.
-
Navigate to the storage account. Obtain the Shared Storage Key for the storage account by selecting the left-hand menu item Access keys. The keys are hidden by default; select Show keys to reveal them. Copy the Key value of key1 and retain it, along with the name of your Storage Account. We will use both in the next task.
We will be using Microsoft Power Automate as a means to email notifications to the workforce in the field. This flow will respond to new messages placed on the queue that we created in Task 3.
-
Access Microsoft Power Automate and sign in (create an account if you don't already have one).
-
From the left-hand menu, select + Create, then choose Instant cloud flow.
-
When the dialog displays, select the Skip link at the bottom to dismiss it.
-
From the search bar, type queue to filter connectors and triggers. Then, select the When there are messages in a queue (V2)(preview) item from the filtered list of Triggers.
-
Fill out the form as follows, then select the Create button:
Authentication Type: Access Key
Storage Account Name: enter the generated storage account name
Shared Storage Key: paste the Key value recorded in Task 3
-
In the queue step, for the Storage account name field, select the Use connection settings (storageaccount{SUFFIX}) item. For the Queue Name field, select flownotificationqueue. Select the + New step button.
-
In the search box for the next step, search for email, then select the Send an email notification item from the filtered list of Actions.
-
In the Send an email notification form, fill it out as follows, then select the + New Step button.
Subject: Action Required: Pump needs maintenance
Body: put the cursor in the field, then select Message Text from the Dynamic Content menu
-
In the search bar for the next step, search for queue once more, then select the Delete message (V2) item from the filtered list of Actions.
-
In the Delete message form, fill it out as follows, then select the Save button.
Queue Name: flownotificationqueue
Message ID: put the cursor in the field, then select Message ID from the Dynamic Content menu
Pop Receipt: put the cursor in the field, then select Pop Receipt from the Dynamic Content menu
-
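The Message ID and Pop Receipt pair exists because reading a queue message only hides it temporarily; an explicit delete, proving you still hold the current pop receipt, is what removes it. A toy in-memory sketch of that semantics (assumed simplification of the real Azure Queue behavior, which also has visibility timeouts):

```python
import uuid

class ToyQueue:
    """Minimal illustration of Azure Queue get/delete semantics:
    getting a message issues a pop receipt, and deleting requires
    both the message ID and the current pop receipt."""

    def __init__(self):
        # message_id -> (text, current pop receipt or None)
        self._messages = {}

    def put(self, text):
        message_id = str(uuid.uuid4())
        self._messages[message_id] = (text, None)
        return message_id

    def get(self):
        """Return (message_id, pop_receipt, text), or None if empty.
        Each get issues a fresh pop receipt, invalidating older ones."""
        for message_id, (text, _) in self._messages.items():
            pop_receipt = str(uuid.uuid4())
            self._messages[message_id] = (text, pop_receipt)
            return message_id, pop_receipt, text
        return None

    def delete(self, message_id, pop_receipt):
        """Delete only succeeds with the current pop receipt."""
        _, current = self._messages.get(message_id, (None, None))
        if current is None or current != pop_receipt:
            raise ValueError("stale or unknown pop receipt")
        del self._messages[message_id]
```

This is why the Delete message step in the flow needs both dynamic content values: they prove the flow is deleting the exact message instance it just processed.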
Microsoft Power Automate will automatically name the Flow. You are able to edit this Flow in the future by selecting My flows from the left-hand menu.
-
Once the Function App has been provisioned, open the Fabrikam_Oil resource group and select the link for the storage account that was created with the Function App in Task 1. The name will start with "storageaccount".
-
From the left-hand menu, select Access Keys and copy the key1 Connection string. Keep this value handy, as we'll need it in the next task.
-
Return to the Fabrikam_Oil resource group and select the link for the Event Hubs Namespace.
-
With the Event Hubs Namespace resource open, in the left-hand menu select the Shared access policies item located in Settings, then select the RootManageSharedAccessKey policy.
-
A blade will open where you will be able to copy the Primary Connection string. Keep this value handy as we'll be needing it in the next task.
It is recommended that you never check secrets, such as connection strings, into source control. One way to avoid this is to use settings files: the values stored in these files mimic the environment variables used in production, and the local settings file is never checked into source control.
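The Azure Functions runtime handles this lookup for you: locally, values come from local.settings.json; in Azure, they come from application settings exposed as environment variables. A rough Python sketch of that lookup order (a hypothetical helper for illustration, not part of the Functions runtime):

```python
import json
import os

def get_setting(name, settings_path="local.settings.json"):
    """Mimic the Functions runtime's config lookup: prefer environment
    variables (as in Azure), then fall back to the local settings file
    (as in local development). Returns None if the setting is absent."""
    if name in os.environ:
        return os.environ[name]
    try:
        with open(settings_path) as f:
            return json.load(f).get("Values", {}).get(name)
    except FileNotFoundError:
        return None
```

Because the file sits beside the code but is excluded from source control, developers keep real connection strings locally while the repository stays free of secrets.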
-
Using Visual Studio Code, open the C:\MCW-Predictive-Maintenance-for-remote-field-devices-main\Hands-on lab\Resources\FailurePredictionFunction folder.
-
Upon opening the folder in Visual Studio Code, you may be prompted to restore unresolved dependencies. If this is the case, select the Restore button.
-
In this folder, create a new file named local.settings.json and populate it with the values obtained in the previous task as follows, then save the file (note: prediction model endpoint was obtained in Exercise 6, Task 1 - step 8):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<paste storage account connection string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "fabrikam-oil_RootManageSharedAccessKey_EVENTHUB": "<paste event hub namespace connection string>",
    "PredictionModelEndpoint": "<paste prediction model endpoint>"
  }
}
Note: If you need to get the prediction model endpoint again, launch your Databricks workspace. Open the Anomaly Detection notebook. Scroll down to the bottom to get the PredictionModelEndpoint value.
-
Observe the static Run function located in PumpFailurePrediction.cs. Its trigger attribute specifies that the function runs every time the iot-central-feed event hub receives a message, and that it reads through a specific consumer group. Consumer groups are used when more than one process may consume the data received by the hub: each group gets its own view of the event stream, so every dependent process receives all messages without contending with the others. The method receives an array (batch) of events from the hub on each execution, as well as a logger instance in case you wish to write values to the console (locally or in the cloud).
public static async Task Run([EventHubTrigger("iot-central-feed", Connection = "fabrikam-oil_RootManageSharedAccessKey_EVENTHUB", ConsumerGroup = "ingressprocessing")] EventData[] events, ILogger log)
-
On lines 29 - 36, the message body received from the event is deserialized into a Telemetry object. The Telemetry class matches the telemetry sent by the pumps and can be found in the Models/Telemetry.cs file.
-
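The same deserialization step can be sketched in Python terms. The field names below are illustrative assumptions; the actual fields are defined in Models/Telemetry.cs:

```python
import json
from dataclasses import dataclass

# Illustrative stand-in for the C# Telemetry class; the real field
# names live in Models/Telemetry.cs.
@dataclass
class Telemetry:
    device_id: str
    temperature: float
    oil_pressure: float

def parse_event(body: bytes) -> Telemetry:
    """Deserialize an event body (UTF-8 JSON) into a Telemetry object."""
    data = json.loads(body.decode("utf-8"))
    return Telemetry(
        device_id=data["deviceId"],
        temperature=data["temperature"],
        oil_pressure=data["oilPressure"],
    )

event_body = b'{"deviceId": "pump-001", "temperature": 74.2, "oilPressure": 32.5}'
print(parse_event(event_body))
```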
From lines 48 - 60, we group the telemetry by Device ID and calculate the averages for each sensor reading. This helps us reduce the number of calls we send to the scoring service that contains our deployed prediction model.
-
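The grouping step above can be sketched as follows (the device IDs and readings are invented for the example):

```python
from collections import defaultdict

# Hypothetical telemetry batch: (device_id, sensor_reading) pairs.
batch = [
    ("pump-001", 70.0), ("pump-001", 74.0),
    ("pump-002", 65.0), ("pump-002", 67.0), ("pump-002", 69.0),
]

# Group readings by device, then average each group -- one scoring
# call per device instead of one per message.
grouped = defaultdict(list)
for device_id, reading in batch:
    grouped[device_id].append(reading)

averages = {device: sum(vals) / len(vals) for device, vals in grouped.items()}
print(averages)  # {'pump-001': 72.0, 'pump-002': 67.0}
```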
Lines 66 - 75 send the averaged telemetry to the scoring service endpoint. The service responds with a 1, meaning the pump requires maintenance, or a 0, meaning no maintenance notification should be sent.
-
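That scoring call can be sketched in Python. The exact request and response shapes depend on how the model was deployed, so the `{"data": [...]}` payload and bare 0/1 response here are assumptions for illustration:

```python
import json
import urllib.request

def build_payload(averaged_readings: dict) -> bytes:
    """Assumed request shape: {"data": [{...readings...}]} as JSON."""
    return json.dumps({"data": [averaged_readings]}).encode("utf-8")

def score_device(endpoint: str, averaged_readings: dict) -> bool:
    """POST averaged telemetry to the scoring endpoint; assume the model
    replies with 1 (maintenance required) or 0 (no notification)."""
    req = urllib.request.Request(
        endpoint,
        data=build_payload(averaged_readings),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read()) == 1
```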
Lines 81 - 109 check Table storage to ensure a notification for the specific device hasn't been sent in the last 24 hours. If a notification is due to be sent, it will update the table storage record with the current timestamp and send a notification by queueing a message onto the flownotificationqueue queue.
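The 24-hour deduplication logic can be sketched as follows; in the function this state lives in the DeviceNotifications table, and a dict stands in for it here:

```python
from datetime import datetime, timedelta

# Stand-in for the DeviceNotifications table: device_id -> last sent time.
last_notified = {}

def should_notify(device_id, now, window=timedelta(hours=24)):
    """Allow at most one notification per device per 24-hour window,
    recording the timestamp whenever a notification goes out."""
    previous = last_notified.get(device_id)
    if previous is not None and now - previous < window:
        return False
    last_notified[device_id] = now
    return True

t0 = datetime(2022, 2, 1, 9, 0)
print(should_notify("pump-001", t0))                       # True: first alert
print(should_notify("pump-001", t0 + timedelta(hours=3)))  # False: within 24h
print(should_notify("pump-001", t0 + timedelta(hours=25))) # True: window passed
```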
We will run the function app and the generator to add values to the queues, so that they can be processed by Power Automate.
Note: You should also be running the Generator, as done in Exercise 2, Task 3. If this isn't running, you will not get values in your queues.
-
Press Ctrl+F5 to run the Azure Function code.
-
After some time, you should see log statements indicating that a message has been queued (indicating that Microsoft Power Automate will send a notification email).
-
Once a message has been placed on the flownotificationqueue queue, it will trigger the notification flow that we created and send an email to the field workers. These emails are sent at 5-minute intervals.
Note: If you are not seeing any failures, you can go back into the IoT Central portal and run the Command for one of the devices. This can trigger the failure.
-
You can now stop the locally running function: select the Terminal window and press Ctrl+C.
-
In Task 6, we created a local settings file to hold environment variables that are used in our function code. We need to mirror these values in the Azure Function App as well. In the Azure portal, access the Fabrikam_Oil resource group, and open the pumpfunctions Function Application.
-
Select the Configuration option in the left-hand menu.
-
In the Application Settings section, we will add the following application settings to mimic those that are in our local.settings.json file. Add a new setting by selecting the New application setting button.
fabrikam-oil_RootManageSharedAccessKey_EVENTHUB: the event hub shared access key value from the local.settings.json file
PredictionModelEndpoint: the prediction model endpoint value from the local.settings.json file
-
Once complete, select the Save button from the top menu to commit the changes to the application configuration.
-
Now that we have been able to successfully run our Functions locally, we are ready to deploy them to the cloud. The first step to deployment is to ensure that you are signed in to your Azure account. Press Ctrl+Shift+P to display the command palette.
-
In the command palette textbox, type Azure: Sign In and press Enter (or select the command from the list). This opens a Microsoft authentication webpage in your default browser. Signing in there authenticates Visual Studio Code with your ID.
-
Once authenticated, we are ready to deploy. Once again, press Ctrl+Shift+P to open the command palette. Type Azure Functions: Deploy and select the Azure Functions: Deploy to Function App command from the list.
-
The first step of this command is to identify where we are deploying the function to. In our case, we have already created a Function App to house our function called pumpfunctions. Select this value from the list of available choices.
-
You may be prompted to confirm that you want to deploy to pumpfunctions; select the Deploy button in this dialog.
-
After some time, a notification window will display indicating the deployment has completed.
-
Returning to the Azure Portal, in the Fabrikam_Oil resource group, open the pumpfunctions function app and observe that the function we created in Visual Studio Code has been deployed.
Duration: 10 minutes
-
In the Azure portal, select Resource Groups. Open the resource group that you created in Exercise 6, and select the Delete resource group button.
-
Delete the Microsoft Power Automate flow that we created. Access Microsoft Power Automate and sign in. From the left-hand menu, select My flows. Select the ellipsis button next to the flow we created in this lab and select Delete.
You should follow all steps provided after attending the Hands-on lab.