Microsoft Cloud Workshops

Predictive Maintenance for remote field devices
Hands-on lab step-by-step
February 2022

Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

The names of manufacturers, products, or URLs are provided for informational purposes only and Microsoft makes no representations or warranties, either express, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement by Microsoft of the site or the products contained therein.

© 2022 Microsoft Corporation. All rights reserved.

Microsoft and the trademarks listed at https://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/Usage/General.aspx are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.

Contents

Predictive Maintenance for remote field devices hands-on lab step-by-step

Abstract and learning objectives

In this hands-on lab, you will build an end-to-end industrial IoT solution. We will begin by leveraging the Azure IoT Central SaaS offering to stand up a fully functional remote monitoring solution quickly. Azure IoT Central provides solutions built upon recommendations found in the Azure IoT reference architecture. We will customize this system specifically for rod pumps. Rod pumps are standard industrial equipment found in the oil and gas industry. We will then establish a model for the telemetry data received from the pump systems in the field and use this model to deploy simulated devices for system testing purposes.

Furthermore, we will establish threshold rules in the remote monitoring system that will monitor the incoming telemetry data to ensure all equipment is running optimally and alert us whenever the equipment is running outside of normal boundaries. Anomalies indicate the need for alternative running parameters, maintenance, or a complete shutdown of the pump. By leveraging the IoT Central solution, users can also issue commands to the pumps from a remote location in an instant to automate many operational and maintenance tasks which used to require staff on-site. Automation lessens operating costs associated with technician dispatch and equipment damage due to a failure.

Above and beyond real-time monitoring and mitigating immediate equipment damage through commanding, you will also learn how to apply the historical telemetry data accumulated to identify positive and negative trends used to adjust daily operations for higher throughput and reliability.

Overview

The Predictive Maintenance for Remote Field Devices hands-on lab is an exercise that will challenge you to implement an end-to-end scenario using the supplied example based on Azure IoT Central and other related Azure services. It is beneficial to pair up with other members at the lab to model a real-world experience and allow each member to share their expertise for the overall solution rather than implementing the lab independently.

Solution architecture

The architecture diagram shows the components of the preferred solution.

Azure IoT Central is at the core of the preferred solution. It is used for data ingest, device management, data storage, and reporting. IoT field devices securely connect to IoT Central through its cloud gateway. The continuous export component sends device telemetry data to Azure Blob storage for cold storage, and the same data to Azure Event Hubs for real-time processing. Azure Databricks uses the data stored in cold storage to periodically re-train a Machine Learning (ML) model to detect oil pump failures. It is also used to deploy the trained model to a web service hosted by Azure Kubernetes Service (AKS) or Azure Container Instances (ACI), using Azure Machine Learning. An Azure function is triggered by events flowing through Event Hubs. It sends the event data for each pump to the web service hosting the deployed model, then sends an alert through Microsoft Power Automate if an alert has not been sent within a configurable period of time. The alert is sent in the form of an email, identifying the failing oil pump with a suggestion to service the device.

Azure IoT Central architecture

The architecture diagram illustrates the components of Azure IoT Central.

The diagram above shows the components of IoT Central's architecture that pertain to Fabrikam's use case. Because it is a SaaS-based solution, the Azure services IoT Central uses are hidden from view. IoT field devices securely connect to IoT Central through its cloud gateway. The gateway uses Azure IoT Hub Device Provisioning Service (DPS) to streamline device management, and the underlying IoT Hub service facilitates bi-directional communication between the cloud and IoT devices. All device telemetry is stored in a time series data store, based on Azure Time Series Insights. The application data store persists the IoT Central application and its customizations. This application provides a user interface shell, through which Fabrikam's users manage devices and associated metadata, view dashboards and reports, and configure rules and actions to react to device telemetry that indicates possible rod pump failure. Pervasive throughout the end-to-end solution is security in transit and at rest for devices and the web-based management application. The continuous data export feature is used to enable hot and cold path workloads on real-time and batch device telemetry and metadata, using external services. Batch data is exported to Azure Blob storage in Apache Avro file format each minute, and real-time data is exported to either Azure Event Hubs or Azure Service Bus.

Requirements

  1. Microsoft Azure subscription (non-Microsoft subscription; must be a pay-as-you-go subscription).
  2. .NET Core 3.1
  3. Visual Studio Code version 1.39 or greater
  4. C# Visual Studio Code Extension
  5. Azure Functions Core Tools version 2.x (using NPM or Chocolatey - see readme on GitHub repository)
  6. Azure Functions Visual Studio Code Extension
  7. An Azure Databricks cluster running Databricks Runtime 10.3 or above.

Before the hands-on lab

Refer to the Before the hands-on lab setup guide before continuing to the lab exercises.

Exercise 1: Configuring IoT Central with devices and metadata

Duration: 45 minutes

Azure IoT Central is a Software as a Service (SaaS) offering from Microsoft. The aim of this service is to provide a frictionless entry into the cloud computing and IoT space. The core focus of many industrial companies is not on cloud computing; therefore, they do not necessarily have the personnel skilled to provide guidance and to stand up a reliable and scalable infrastructure for an IoT solution. It is imperative for these types of companies to enter the IoT space, not only for the cost savings associated with remote monitoring, but also to improve safety for their workers and the environment.

Fabrikam is one such company that could use a helping hand entering the IoT space. They have recently invested in sensor technology on their rod pumps in the field, and they are ready to implement their cloud-based IoT Solution. Their IT department is small and unfamiliar with cloud-based IoT infrastructure; their IT budget also does not afford them the luxury of hiring a team of contractors to build out a solution for them.

The Fabrikam CIO has recently heard of Azure IoT Central. This online offering will streamline the process of getting their sensor data to the cloud, where they can monitor their equipment for failures and improve their maintenance practices, without having to worry about the underlying infrastructure. A predictable cost model also ensures that there are no financial surprises.

Task 1: Model the telemetry data

The first task is to identify the data that the equipment will be sending to the cloud. This data will contain fields that represent the data read from the sensors at a specific instant in time. This data will be used in downstream systems to identify patterns that can lead to cost savings, increased safety, and more efficient work processes.

The telemetry reported by the Fabrikam rod pumps is as follows; we will be using this information later in the lab:

Telemetry schema

| Field | Type | Description |
| --- | --- | --- |
| SerialNumber | String | Unique serial number identifying the rod pump equipment |
| IPAddress | String | Current IP address |
| TimeStamp | DateTime | Timestamp in UTC identifying the point in time the telemetry was created |
| PumpRate | Numeric | Speed calculated over the time duration between the last two times the crank arm has passed the proximity sensor, measured in strokes per minute (SPM) - minimum 0.0, maximum 100.0 |
| TimePumpOn | Numeric | Number of minutes the pump has been on |
| MotorPowerkW | Numeric | Measured in kilowatts (kW) |
| MotorSpeed | Numeric | Motor speed including slip, measured in revolutions per minute (RPM) |
| CasingFriction | Numeric | Measured in pounds per square inch (PSI) |
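
For reference throughout the lab, the simulator and the Azure Function both work with telemetry shaped like the schema above. The following is a minimal C# sketch of that shape; the property names follow the table, but the lab's actual Models/Telemetry.cs class may differ slightly in naming and types.

```csharp
using System;

// Illustrative only: a plain C# class matching the telemetry schema above.
// The lab's Models/Telemetry.cs may name or type members differently.
public class Telemetry
{
    public string SerialNumber { get; set; }    // Unique rod pump serial number
    public string IPAddress { get; set; }       // Current IP address of the device
    public DateTime TimeStamp { get; set; }     // UTC time the reading was taken
    public double PumpRate { get; set; }        // Strokes per minute (0.0 - 100.0)
    public double TimePumpOn { get; set; }      // Minutes the pump has been on
    public double MotorPowerkW { get; set; }    // Motor power in kilowatts (kW)
    public double MotorSpeed { get; set; }      // Motor speed including slip (RPM)
    public double CasingFriction { get; set; }  // Casing friction in PSI
}
```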

Task 2: Create an IoT Central application

  1. Navigate to the Azure portal.

  2. Expand the left menu, and select + Create a resource, type in "IoT Central" in the search field, then select IoT Central application from the results.

    The browser screen shows the Create a resource screen. IoT Central is entered in the search box. IoT Central application is highlighted in the suggested search results.

  3. On the IoT Central application resource overview screen, select Create.

    The browser screen shows the IoT Central application resource screen. The Create button is highlighted.

  4. In the IoT Central Application, fill out the form with the following settings:

     | Field | Value |
     | --- | --- |
     | Resource name | Enter a globally unique name. |
     | Application URL | Keep the default. |
     | Subscription | Select the appropriate subscription. |
     | Resource group | Select Fabrikam_Oil. |
     | Pricing plan | Select Standard 2. |
     | Template | Select Custom application. |
     | Location | Select the region nearest to you. |

    The IoT Central Application form is shown populated with the preceding values.

  5. Select Create.

  6. Wait for the application to be provisioned.

Task 3: Create the Device Template

  1. In the Azure Portal, open the Fabrikam_Oil resource group. Select the IoT Central Application resource from the listing.

    The Fabrikam_Oil resource group listing is shown with the IoT Central Application highlighted.

  2. From the IoT Central Application Overview screen, select the IoT Central Application URL. This opens the IoT Central application in a new tab in your browser.

    The IoT Central Application Overview screen is shown with the IoT Central Application URL highlighted.

  3. We need to define the type of equipment we are using, and the data associated with the equipment. In order to do this, we must define a Device Template. Select the Device Templates menu item from the left-hand menu. Then select + New from the toolbar menu.

    The Device templates button is selected from the left menu and the + New button is highlighted from the toolbar.

  4. On the Select type screen, select IoT device as the custom device template type. Select the Next: Customize button.

    The IoT device card is selected on the Select type screen.

  5. On the Customize form, for the device template name, enter Rod Pump. Keep the Gateway device checkbox unchecked. Select the Next: Review button.

    The Customize form is displayed populated as described.

  6. On the Review screen, select the Create button.

  7. On the Rod Pump device template screen, select Custom model beneath the Create a model heading.

    The Rod Pump device template screen is displayed with the Custom model item selected.

  8. On the Rod Pump Model screen, select the Model parent item in the central navigation pane. Select Add capability to default component from beneath the Capabilities heading.

    The Model screen is displayed with the Add capability to default component card selected.

  9. The Capabilities of the device model describe the telemetry expected from the device, the commands it responds to, and its properties (device twin properties). We'll begin by defining the Telemetry values; populate the capabilities as described in the following table. Use the + Add capability button at the bottom of the screen to add additional telemetry values. Select the Save button from the toolbar once complete. You will need to expand the capability form to edit the Schema, Unit, Display Unit, and Description fields.

     | Display Name | Name | Capability Type | Semantic Type | Schema | Unit | Display Unit | Description |
     | --- | --- | --- | --- | --- | --- | --- | --- |
     | Pump Rate | PumpRate | Telemetry | None | Double | None | SPM | Speed calculated over the time duration between the last two times the crank arm has passed the proximity sensor, measured in Strokes Per Minute (SPM) |
     | Time Pump On | TimePumpOn | Telemetry | Time Span | Double | Minute | Minutes | Number of minutes the pump has been on |
     | Motor Power | MotorPowerKw | Telemetry | Power | Double | None | kW | Measured in Kilowatts (kW) |
     | Motor Speed | MotorSpeed | Telemetry | Angular Velocity | Integer | Revolution per minute | RPM | Motor speed including slip (RPM) |
     | Casing Friction | CasingFriction | Telemetry | None | Double | None | PSI | Measured in PSI (psi) |

    The Rod Pump model Capabilities form is shown populated with the aforementioned Telemetry values.

  10. We also need to define the current state of the pump, whether it is running or not. Remaining on the same screen, select + Add capability. Add the state with the display name of Power State, field name of PowerState, capability type of Telemetry, and semantic type of State. For Value schema, select String, then add the values Unavailable, On, and Off (in each of the Display name, Name, and Value columns). Select the Save button from the toolbar menu.

    The screen displays the state configuration options.

  11. In the device template, Properties are metadata associated with the equipment. These are added on the same form as the capabilities, only using Property for the Capability type field. For our template, we will expect a property for Serial Number, IP Address, and the geographic location of the pump. Remaining on the same screen, select + Add capability. Define the device properties as follows, then select Save from the toolbar menu:

     | Display Name | Name | Capability Type | Semantic Type | Schema | Writable | Description |
     | --- | --- | --- | --- | --- | --- | --- |
     | Serial Number | SerialNumber | Property | None | String | Off | The serial number of the rod pump |
     | IP Address | IPAddress | Property | None | String | Off | The IP address of the rod pump |
     | Pump Location | Location | Property | Location | Geopoint | Off | The geographic location of the rod pump |

    The screen displays the device properties.

  12. Operators and field workers will want to be able to turn on and off the pumps remotely. In order to do this, we will define a command. Remaining on the same screen, select + Add capability. Select the Commands tab and select the New button to add a new command. Create a command as follows, and select Save from the toolbar:

    • Display Name - Toggle Motor Power
    • Name - ToggleMotorPower
    • Capability Type - Command
    • Request - On
    • (Request) Display name - Toggle
    • (Request) Name - Toggle
    • (Request) Schema - Boolean
    • Description - Toggle the motor power of the pump on and off.

    The command form fields are displayed and populated as described.

  13. The Rod Pump capabilities should now look similar to the following (collapsed):

    A summary (collapsed) view of the Rod Pump model capabilities is displayed.

  14. Now, we can define device specific views to help us visualize telemetry and state of the Rod Pumps. We can also create forms to allow pump operators to execute commands on a device. From the center pane navigation menu of the Rod Pump device template, select the Views item. In the Select to add a new view screen, select the Visualizing the device card.

    The Device templates Views screen is shown with the Visualizing the device card highlighted.

  15. A View is composed of one or more tiles that display information related to a specific device. In the Create view form, set the View name to Dashboard. Underneath Add a tile, select Start with devices. Then expand the Telemetry drop down, select each item individually, and choose the Add tile button to add the chart to the View surface. Feel free to arrange the tiles as desired on the View design surface.

    The device template Dashboard create view screen is shown, with an item selected in the Telemetry section, and the Add tile button highlighted.

  16. Remaining on the Dashboard view, expand the Property drop down list, and add a tile for IP Address, Serial Number, and Pump Location. Note that the tile for Pump Location renders with a map icon, meaning that IoT Central has identified the property as geography-based data, and will render it on a map appropriately.

    A portion of the Dashboard View design surface is shown with tiles added for various properties of the device.

  17. Spend some time now and investigate the various visualizations and settings you can set on each tile. For instance, you have the ability to customize chart types, colors, and axes. You can also resize each tile individually. Select the Save button in the toolbar menu to save the Dashboard view.

  18. On the Rod Pump device template screen, select the Views item from the central navigation pane, and choose Visualizing the device once again to create a new view. Name this view Command, and add a tile for the Toggle Motor Power command. Once complete, press the Save button in the toolbar. This View will allow pump operators to initiate the toggle power command from the IoT Central application.

  19. Finally, we can add a thumbnail image to represent the equipment. Select Device templates, then select Rod Pump. Select the circle icon to the left of the template name. This will allow you to select an image file. The image used in this lab can be found on PixaBay. After setting the thumbnail, select the Publish button in the device template toolbar. Finally, select Publish to publish the device template.

    The header portion of the Rod Pump device template is shown with the thumbnail and Publish menu item highlighted.

Task 4: Create and provision real devices

Under the hood, Azure IoT Central uses the Azure IoT Hub Device Provisioning Service (DPS). The aim of DPS is to provide a consistent way to connect devices to the Azure cloud. Devices can utilize Shared Access Signatures (SAS) or X.509 certificates to securely connect to IoT Central.

Multiple options exist to register devices in IoT Central, ranging from individual device registration to bulk device registration via a comma delimited file. In this lab, we will register each device individually using SAS.

  1. In the left-hand menu of your IoT Central application, select Devices. Select the Rod Pump template from the Devices blade and select the + New button from the toolbar to add a new device.

    The Devices menu item is selected, with Rod Pump selected from the Devices blade. The + New button is highlighted in the toolbar.

  2. A modal window will be displayed with an automatically generated Device ID and Device Name. You are able to overwrite these values with anything that makes sense in your downstream systems. We will be creating three real devices in this lab. Create the following as real devices, ensuring Simulate this device remains toggled off:

    The create new device modal is displayed populated with the details of DEVICE001.

     | Device Name | Device ID |
     | --- | --- |
     | Rod Pump - DEVICE001 | DEVICE001 |
     | Rod Pump - DEVICE002 | DEVICE002 |
     | Rod Pump - DEVICE003 | DEVICE003 |

  3. On the Devices list, notice how all three real devices have the provisioning status of Registered.

    All of the devices are listed and their associated provisioning status. The devices show as registered.

Exercise 2: Run the Rod Pump Simulator

Duration: 30 minutes

Included with this lab is source code that will simulate the connection and telemetry of three real pumps. In the previous exercise, we have defined them as DEVICE001, DEVICE002, and DEVICE003. The purpose of the simulator is to demonstrate real-world scenarios that include a normal healthy rod pump (DEVICE002), a gradually failing pump (DEVICE001), and an immediately failing pump (DEVICE003).
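
To make the scenario concrete, the sketch below shows one way such failure profiles could be generated for the motor power reading: DEVICE001 decays gradually, DEVICE003 drops sharply after a fixed point, and DEVICE002 stays healthy. This is only an illustration of the idea; the actual logic in the Fabrikam.FieldDevice.Generator project differs.

```csharp
using System;

// Illustrative only: one way to shape the three failure profiles described above.
// The actual generator logic in Fabrikam.FieldDevice.Generator differs.
public static class FailureProfiles
{
    public static double NextMotorPowerKw(string deviceId, int messageNumber, Random random)
    {
        // Normal operating range, safely above the 30 kW threshold referenced later in the lab.
        double healthy = 32.0 + random.NextDouble() * 4.0;

        switch (deviceId)
        {
            case "DEVICE001":
                // Gradual failure: power slowly decays as more messages are sent.
                return Math.Max(5.0, healthy - messageNumber * 0.02);
            case "DEVICE003":
                // Immediate failure: power drops sharply after a fixed number of messages.
                return messageNumber < 600 ? healthy : 6.0 + random.NextDouble();
            default:
                // DEVICE002 remains healthy for the duration of the run.
                return healthy;
        }
    }
}
```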

Task 1: Record device connection information

  1. In IoT Central, select Devices from the left-hand menu. Then, from the devices list, select the link for Rod Pump - DEVICE001, and select Connect located in the toolbar.

    The device screen for DEVICE001 is displayed with the Connect button highlighted.

  2. A Device connection modal is displayed. Make note of the ID Scope, Device ID, and Primary key values.

    The device connection key information is displayed.

  3. Repeat steps 1 and 2 for DEVICE002 and DEVICE003.

Task 2: Open the Visual Studio solution, and update connection string values

  1. Using Visual Studio Code, open the C:\MCW-Predictive-Maintenance-for-remote-field-devices-main\Hands-on lab\Resources\FieldDeviceSimulator\Fabrikam.FieldDevice.Generator folder.

    If you are prompted by Visual Studio code to install additional components to enable debugging, please select the option to install the components.

  2. Open the appsettings.json file, and copy & paste the ID Scope and Device Primary Key values into the file.

    The screenshot shows the applications settings that include the key values for all three devices.

  3. Open Program.cs and go to line 144, where you will find the SetupDeviceRunTasks method. This method is responsible for creating the source code representations of the devices that we have defined earlier in the lab. Each device is initialized based on the values obtained from configuration (appsettings.json). Note that DEVICE001 is defined as the pump that will gradually fail, DEVICE002 as a healthy pump, and DEVICE003 as a pump that will fail immediately after a specific amount of time. Line 167 also adds an event handler that gets fired every time the Power State for a pump changes. The power state of a pump gets changed via a cloud-to-device command - we will be visiting this concept later on in this lab.

  4. Open PumpDevice.cs. This class represents a device in the field. It encapsulates the properties (serial number and IP address) that are expected in the properties for the device in the cloud. It also maintains its own power state. Line 73 shows the RegisterAndConnectDeviceAsync method that is responsible for connecting to the global device provisioning endpoint to obtain the underlying IoT Hub connection information for the device and establish a connection to the IoT Central application (through the DeviceClient). Line 110 shows the SendDevicePropertiesAndInitialState method, which updates the reported properties (also referred to as the device twin) from the device to the cloud. Line 154 shows the SendEvent method that sends the generated telemetry data to the cloud.
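
     As an illustration of the pattern that RegisterAndConnectDeviceAsync follows, the sketch below registers a device through the global DPS endpoint using its symmetric (SAS) key and then opens a DeviceClient connection to the assigned hub. It assumes the Microsoft.Azure.Devices.Client and Microsoft.Azure.Devices.Provisioning.Client packages; the method name, variable names, and error handling here are simplified and are not the lab's exact code.

     ```csharp
     using System;
     using System.Threading.Tasks;
     using Microsoft.Azure.Devices.Client;
     using Microsoft.Azure.Devices.Provisioning.Client;
     using Microsoft.Azure.Devices.Provisioning.Client.Transport;
     using Microsoft.Azure.Devices.Shared;

     // Simplified sketch of DPS registration followed by an IoT Hub connection.
     public static async Task<DeviceClient> RegisterAndConnectAsync(
         string idScope, string deviceId, string primaryKey)
     {
         // Symmetric-key attestation using the values copied from IoT Central.
         using var security = new SecurityProviderSymmetricKey(deviceId, primaryKey, null);
         using var transport = new ProvisioningTransportHandlerMqtt();

         var provisioningClient = ProvisioningDeviceClient.Create(
             "global.azure-devices-provisioning.net", idScope, security, transport);

         // DPS assigns the device to the IoT hub that backs the IoT Central application.
         DeviceRegistrationResult result = await provisioningClient.RegisterAsync();
         if (result.Status != ProvisioningRegistrationStatusType.Assigned)
         {
             throw new InvalidOperationException($"Registration failed: {result.Status}");
         }

         // Connect to the assigned hub with the same symmetric key.
         var auth = new DeviceAuthenticationWithRegistrySymmetricKey(
             result.DeviceId, security.GetPrimaryKey());
         return DeviceClient.Create(result.AssignedHub, auth, TransportType.Mqtt);
     }
     ```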

Task 3: Run the application

  1. Within Visual Studio Code, expand the .vscode sub-folder, then open launch.json. Update the console setting to externalTerminal. This will cause the debugger to launch the console window into an external terminal instead of within Visual Studio Code. This is a required step since the internal terminal does not support entering values (ReadLine).

    The console setting is highlighted in launch.json.

  2. Using Visual Studio Code, Debug the current project by pressing F5.

  3. Once the menu is displayed, select option 1 to generate and send telemetry to IoT Central.

    The command prompt displays the choice to generate and send telemetry to IoT Central.

  4. Allow the simulator to start sending telemetry data to IoT Central. You will see output similar to the following:

    A command prompt shows a few provisioned devices and their corresponding measurements.

  5. Allow the simulator to run while continuing with this lab.

  6. After some time has passed, in IoT Central select the Devices item in the left-hand menu. Note that the provisioning status of DEVICE001, DEVICE002, and DEVICE003 now indicate Provisioned.

    A list of provisioned devices is displayed.

Task 4: Interpret telemetry data

DEVICE001 is the rod pump that will gradually fail. After running the simulator for approximately 10 minutes (around 1100 messages), you can take a look at the Motor Power chart on the device Dashboard, or on the measurements tab, and watch the power consumption decrease.

  1. In IoT Central, select the Devices menu item, then select the link for Rod Pump - DEVICE001 in the Devices list.

  2. Ensure the Dashboard tab is selected and observe how the Motor Power usage of DEVICE001 gradually degrades. Note: You may not yet see the degradation at this stage. The motor power will gradually decrease after running for several minutes.

    The average Motor Power measurements are displayed in a graph. The graph shows Motor Power usage degradation.

  3. Repeat steps 1 and 2 and observe that DEVICE002, the non-failing pump, remains above the 30 kW threshold. DEVICE003 is also a failing pump but displays an immediate failure versus a gradual one.

    Regular range motor power is displayed in a graph. Power measurements show steady.

    Low motor measurements are being displayed in a graph. There is a steep power drop off around 3:52 PM.

Task 5: Restart a failing pump remotely

After observing the failure of two of the rod pumps, you are able to cycle the power state of a pump remotely. The simulator is set up to receive the Toggle Motor Power command from IoT Central; it will update the state accordingly and start/stop sending telemetry to the cloud.
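
Commands defined in an IoT Central device template are delivered to the device over its IoT Hub connection, and a device typically handles them by registering a handler on its DeviceClient. The sketch below shows that general pattern for the Toggle Motor Power command; the handler body and the SetPowerState helper are illustrative assumptions, not the simulator's exact implementation.

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

// Illustrative only: handle the ToggleMotorPower command as a direct method call.
public static Task RegisterToggleHandlerAsync(DeviceClient deviceClient, PumpDevice pump)
{
    return deviceClient.SetMethodHandlerAsync("ToggleMotorPower", (request, context) =>
    {
        // The payload carries the Boolean "Toggle" value chosen in IoT Central.
        bool toggle = bool.TryParse(request.DataAsJson?.Trim('"'), out var value) && value;

        pump.SetPowerState(toggle);   // hypothetical helper that starts/stops telemetry

        // Status 200 reports successful completion back to IoT Central.
        return Task.FromResult(new MethodResponse(Encoding.UTF8.GetBytes("{}"), 200));
    }, userContext: null);
}
```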

  1. In IoT Central, select Devices from the left-hand menu, then select Rod Pump - DEVICE001 from the device list. Observe that even though the pump has, for all intents and purposes, failed, there is still power to the motor. In order to recover DEVICE001, select the Command tab. You will see the Toggle Motor Power command with a Toggle parameter. Set the Toggle parameter to False, then select the Run button on the command to turn the pump motor off.

    Device Toggle Motor Power command options are displayed. The Run button is highlighted.

  2. The simulator will also indicate that the command has been received from the cloud. Note in the output of the simulator, that DEVICE001 is no longer sending telemetry due to the pump motor being off.

    Command prompt displays several messages. Simulator showing DEVICE001 received the cloud message. Device power cycle is displayed.

  3. After a few moments, return to the Dashboard tab of DEVICE001 in IoT Central. Note the receipt of telemetry has stopped, and the state indicates the motor power state is off.

    Device DEVICE001 is in a Power State Off with no telemetry coming in.

  4. Return to the Commands tab and toggle the motor power back on again by selecting True and pressing the Run button once more. On the Dashboard tab, you will see the Power State switch back to online and telemetry start flowing again. Due to the restart of the rod pump, it has now recovered, and telemetry is back within normal ranges!

    A graph of device telemetry measurements is displayed. DEVICE001 recovered after Pump Power State has been cycled.

Exercise 3: Creating a device group

Duration: 10 minutes

Device groups allow you to create logical groupings of IoT devices in the field by the properties that define them. In this case, we will create a device group that contains only the rod pumps located in the state of Texas.

Task 1: Create a device group using a filter

In the field, all Texas pumps are located in the 192.168.1.* subnet, so we will create a filter to include only those pumps in this device group.

  1. In the left-hand menu, select the Device groups menu item. You will see a single default device group in the list (Rod Pump - All devices). Select the + New button in the toolbar.

    A screen displays the current device groups. The + New button is highlighted.

  2. Set the name for this device group to Texas Rod Pumps. For the description, enter Rod pumps located in Texas.

  3. In the section Create a device query, in the Value column, select Rod Pump.

  4. Select + Filter to add another filter.

  5. Set up the filter with the following settings:

    • Name: IP Address
    • Operator: Contains
    • Value: 192.168.1.

    The screen shows the device group query. There is a list of devices and their settings displayed in a results table.

  6. Select Save from the toolbar. Your results should be:

     | Field | Value |
     | --- | --- |
     | Device Group Name (in header) | Texas Rod Pumps |
     | Description (in header) | Rod pumps located in Texas |
     | Scope | Device template: Rod Pump |
     | Filter: Property | IP Address |
     | Filter: Operator | contains |
     | Filter: Value | 192.168.1. |

Once the device group is saved, you are able to act upon this group of devices as a single unit within IoT Central.

Exercise 4: Creating a useful dashboard

Duration: 30 minutes

One of the main features of IoT Central is the ability to visualize the health of your IoT system at a glance. Creating a customized dashboard that best fits your business is instrumental in improving business processes. In this exercise, we will go over adding a main dashboard that will be displayed upon entry to the IoT Central application.

Task 1: Clearing out the default dashboard

  1. In the left-hand menu, select the Dashboard item. Then, select the Edit button.

    The dashboard editing options are presented to the user.

  2. Expand the ellipsis menu on each tile that you do not wish to see on the dashboard and select Delete to remove them. Some tiles are deleted by selecting the X button on the tile.

    The ellipsis menu on a tile is expanded with the Delete menu item selected.

Task 2: Add your company logo

  1. Remaining in the edit mode of the dashboard, select Image (static) from the Add a tile section of the menu. Select Add tile.

    The Custom tiles items are displayed. The Image item is selected.

  2. Once the tile is added to the design surface, expand the ellipsis menu on the tile, and select Configure. Configure the logo with the following file C:\MCW-Predictive-Maintenance-for-remote-field-devices-main\Hands-on lab\media\fabrikam-logo.png. Select the Update button once complete.

    The company logo configuration options are displayed.

  3. Resize the logo on the dashboard using the handle on the lower right of the logo tile.

    The company logo is displayed and option to resize tool is selected.

Task 3: Add a list of Texas Rod Pumps

In the previous exercise, we created a device group that contains the devices located in Texas. We will leverage this device set to display this filtered information.

  1. Remaining in the edit mode of the dashboard, select Start with devices from the Add a tile section of the menu.

  2. Select Texas Rod Pumps from the Device group dropdown, then select each of the devices in the Devices list.

  3. In the Property section, select the Serial Number and IP Address properties. Add properties by selecting the + Property button. Once complete, select the Add tile button.

    The blade for adding a tile is displayed.

    The dashboard shows company logo and the device IDs and IP addresses.

Task 4: Add a map displaying the locations of the Texas Rod Pumps

It is beneficial to see the location of certain critical Texas rod pumps. We will add a map that will display the location of each of the Texas Rod Pump devices.

  1. Return to the Edit view of the Dashboard. Select Start with a visual from the Add a tile section of the menu. Select Map (property). Select Add tile.

    The Rod Pump location is displayed in a map. A list of devices and their IP addresses are listed on the right-hand side.

  2. Once the tile is added to the design surface, select the Edit (pencil icon) from the tile's toolbar. Configure the map with the following settings, then select the Update button to save the configuration changes.

     | Field | Value |
     | --- | --- |
     | Title | Pump Location |
     | Device Group | Texas Rod Pumps |
     | Devices | Select each device |
     | Property | Pump Location |

    The Rod Pump location is displayed in a map. A list of devices and their IP addresses are listed on the right-hand side.

  3. Observe how the device group now has a map displaying markers for each device in the set. Feel free to adjust the zoom to better see each device's location.

    The Rod Pump location is displayed in a map. A list of devices and their IP addresses are listed on the right-hand side.

  4. Return to the edit view of the Dashboard and experiment with adding additional visualizations related to the Texas Rod Pumps device group. For instance, add a Line chart tile that shows the Casing Friction data for all of the devices in a single chart.

    A Line chart tile has been added to the dashboard showing the Casing Friction for every device in the Texas Rod Pump device group.

Exercise 5: Create an Event Hub and continuously export data from IoT Central

Duration: 15 minutes

IoT Central provides a great first stepping stone into a larger IoT solution. A more mature IoT solution typically involves a machine learning model that will process incoming telemetry to logically determine if a failure of a pump is imminent. The first step in this implementation is to create an Event Hub to act as a destination for IoT Central's continuously exported data.

Task 1: Create an Event Hub

The Event Hub we will be creating will act as a collector for the telemetry exported from IoT Central. The receipt of a message into this hub will ultimately serve as a trigger to send data into a machine learning model to determine if a pump is in a failing state. We will also create a Consumer Group on the event hub to serve as an input to an Azure Function that will be created later on in this lab.

  1. Log into the Azure Portal, and open your Fabrikam_Oil resource group.

  2. On the top of the screen, select the Create button. When the marketplace screen displays, search for and select Event Hubs. This will allow you to create a new Event Hub Namespace resource. Select the Create button on the resource overview screen.

    The marketplace search box contains Event Hubs, and the Event Hubs suggestion is highlighted.

  3. Configure the Event Hubs namespace as follows, select the Review + create button, and then Create:

     | Field | Value |
     | --- | --- |
     | Subscription | Select the appropriate subscription |
     | Resource Group | Fabrikam_Oil |
     | Name | Anything (must be globally unique) |
     | Location | Select the location nearest to you |
     | Pricing Tier | Standard |

    The creating event hub namespace options are displayed.

  4. Once the Event Hubs namespace has been created, open it and select the + Event Hub button at the top of the screen.

    The Event Hub namespace screen is displayed. The create event hub button is circled.

  5. In the Create Event Hub form, configure the hub as follows and select the Create button:

     | Field | Value |
     | --- | --- |
     | Name | iot-central-feed |
     | Partition Count | 2 |
     | Message Retention | 1 |
     | Capture | Off |

    The event hub creation screen is displaying the configuration options.  The create button is circled.

  6. Once the Event Hub has been created, open it by selecting Event Hubs in the left-hand menu, and selecting the hub from the list.

    The event hubs are listed. iot-central-feed is highlighted.

  7. From the top menu, select the + Consumer Group button to create a new consumer group for the hub. Name the consumer group ingressprocessing and select the Create button.

    The event hub screen shows the ability to create a consumer group. The name is entered and the create button is circled.

Task 2: Configure continuous data export from IoT Central

  1. Return to the IoT Central application. From the left-hand menu, select Data export, then select the + New button from the toolbar menu.

     The left-hand menu has the Data export link highlighted, along with the + New button on the toolbar.

  2. Begin configuring the data export with the following values:

     | Field | Value |
     | --- | --- |
     | Display Name (Header) | Event Hub Feed |
     | Enabled (Header) | On |
     | Type of data to export | Telemetry |

  3. In the Destinations section, select the create a new one link. Configure the new destination as follows, then select the Create button:

     | Field | Value |
     | --- | --- |
     | Destination name | iot-central-event-hub-feed |
     | Destination type | Azure Event Hubs |
     | Connection string | See the steps below on how to get the connection string |
     | Event Hub | iot-central-feed |

    Obtain the connection string as follows:

    • Navigate to your Event Hubs namespace in the Azure portal.

    • Select Shared access policies on the left-hand menu, then select the RootManageSharedAccessKey and copy the Connection string-primary key.

      The Event Hubs connection string is highlighted.

    • Return to IoT Central and paste the connection string into the Connection string field, then select the iot-central-feed event hub you created.

    The Create destination modal is shown populated.

    The data export configuration fields are displayed.

  4. Select Save from the toolbar menu on the Event Hub Feed continuous export screen.

  5. The Event Hub Feed export will be created, and then started (it may take a few minutes for the export to start). Return to the Data export list to see the current status of the feed.

    The Data Export screen displays the status of Event Hub creation. The status of healthy is displayed.

Exercise 6: Use Azure Databricks and Azure Machine Learning service to train and deploy predictive model

Duration: 15 minutes *

Note: The steps to go through this exercise will take about 15 minutes. However, the processing time may be longer, depending on Databricks' processing.

In this exercise, we will use Azure Databricks to train a deep learning model for anomaly detection by training it to recognize the normal operating conditions of a pump. We use three data sets for training: telemetry generated by a pump operating under normal conditions, by a pump that suffers a gradual failure, and by a pump that fails immediately.

After training the model, we validate it, then register the model in your Azure Machine Learning service workspace. Finally, we deploy the model in a web service hosted by Azure Container Instances for real-time scoring from the Azure function.

Task 1: Run the Anomaly Detection notebook

  1. In the Azure portal, open your lab resource group, then open your Azure Databricks Service.

  2. Select Launch Workspace. Azure Databricks will automatically sign you in through its Azure Active Directory integration.

    Launch Workspace option is displayed.

  3. In Azure Databricks, select Workspace, select Users, then select your username.

  4. Select the Anomaly Detection notebook to open it.

    The Anomaly Detection notebook is highlighted.

  5. Before you can execute the cells in this notebook, you must first attach your Databricks cluster. Expand the dropdown at the top of the notebook where you see Detached. Select your lab cluster to attach it to the notebook. If it is not currently running, you will see an option to start the cluster.

    The screenshot displays the lab cluster selected for attaching to the notebook. The lab configuration values are displayed.

  6. Provide values in the Cmd 56 cell. You can find your Machine Learning workspace information from the resource group details in the Azure portal.

    The screenshot displays the resources in our resource group, highlighting the Machine learning resource.

  7. You may use keyboard shortcuts to execute the cells, such as Ctrl+Enter to execute a single cell, or Shift+Enter to execute a cell and move to the next one below.

    Note: Cmd 58 will request you to authenticate your device. Pay close attention to the URL and the code for authentication.

    Cmd 60 may take upwards of 10 minutes to create the container registry. It took about 30 minutes when we updated this in February 2022.

  8. Copy the scoring web service URL from the last cell's result after executing it. You will use this value to update a setting in your Azure function in the next exercise to let it know where the model is deployed.

Exercise 7: Create an Azure Function to predict pump failure

Duration: 45 minutes

We will be using an Azure Function to read incoming telemetry from the Event Hub and send it to the HTTP endpoint of our predictive maintenance model. The function will receive a 0 or 1 from the model indicating whether or not a pump should be maintained to avoid possible failure. A notification will also be initiated through Microsoft Power Automate to notify field workers of the maintenance required.

Task 1: Create an Azure Function Application

  1. Return to the Azure Portal.

  2. Open your resource group for this lab.

  3. From the top menu, select the + Create button, and search for Function App. Select the Function App result, then select Create.

    The Marketplace is displayed. The search input shows "Function App". The Function App image and title are highlighted.

  4. Configure the Function App with the following settings, then select Next: Hosting >:

     | Field | Value |
     | --- | --- |
     | App Name | Your choice; must be globally unique |
     | Subscription | Select the appropriate subscription |
     | Resource Group | Use existing, and select Fabrikam_Oil |
     | Publish | Select Code |
     | Runtime Stack | Select .NET |
     | Version | Select 3.1 |
     | Location | Select the location nearest to you |

    The create Function App blade is displayed. All of the Function App configurations are displayed.  The create button is circled.

  5. Configure the Hosting options as follows, then select Review + create:

     | Field | Value |
     | --- | --- |
     | Storage Account | Retain the default value of create new |
     | Operating System | Windows |
     | Plan Type | Consumption (Serverless) |

    The create Function App blade is displayed. All of the Function App configurations are displayed.  The create button is circled.

  6. On the Review blade, select Create, then wait until the Function App is created before continuing.

Task 2: Create a notification table in Azure Storage

One of the things we would like to avoid is sending repeated notifications to the workforce in the field. Notifications should only be sent once every 24-hour period per device. To keep track of when a notification was last sent for a device, we will use a table in a Storage Account.

  1. In the Azure portal, select Resource groups from the left-hand menu, then select the Fabrikam_Oil link from the listing.

  2. Select the link for the storage account that was created with the Function App in Task 1. The name will start with "storageaccount".

    The resources are listed in the blade. The storage account is circled.

  3. From the Storage Account left-hand menu, select Tables from the Data storage section, then select the + Table button, and create a new table named DeviceNotifications.

    The storage account table administration blade is displayed. The tables link is circled on the left-hand side and the Add Table button is circled in the header.

  4. Keep the Storage Account open in your browser for the next task.

Task 3: Create a notification queue in Azure Storage

There are many ways to trigger flows in Microsoft Power Automate. One of them is monitoring an Azure queue. We will create this queue in our Azure Storage account.

  1. From the Storage Account left-hand menu, select Queues located beneath the Data storage section, then select the + Queue button, and create a new queue named flownotificationqueue.

    The queues administration blade is displayed. The Add Queue button is circled.

  2. Navigate to the storage account. Obtain the shared storage key for the storage account by selecting the left-hand menu item Access keys. The keys are hidden to start; select Show keys to see them. Copy the Key value of key1 and retain this value, as we will be using it in the next task. Also, retain the name of your Storage Account.

    The storage account access key information is displayed. Show keys is circled.

Task 4: Create notification service in Microsoft Power Automate

We will be using Microsoft Power Automate as a means to email notifications to the workforce in the field. This flow will respond to new messages placed on the queue that we created in Task 3.

  1. Access Microsoft Power Automate and sign in (create an account if you don't already have one).

  2. From the left-hand menu, select + Create, then choose Instant cloud flow.

    Options for creating a flow are displayed. The Instant cloud flow item is highlighted.

  3. When the dialog displays, select the Skip link at the bottom to dismiss it.

    An informational dialog is displayed. The skip button is highlighted.

  4. From the search bar, type queue to filter connectors and triggers. Then, select the When there are messages in a queue (V2)(preview) item from the filtered list of Triggers.

    The trigger dialog is displayed. Create a trigger to respond to a message in the Azure Queue is selected.

  5. Fill out the form as follows, then select the Create button:

     | Field | Value |
     | --- | --- |
     | Authentication Type | Access Key |
     | Storage Account Name | Enter the generated storage account name |
     | Shared Storage Key | Paste the Key value recorded in Task 3 |

    The queue connection configuration fields are displayed. The create button is circled.

  6. In the queue step, for the Storage account name field, select the Use connection settings (storageaccount{SUFFIX}) item. For the Queue Name field, select flownotificationqueue. Select the + New step button.

    Message queue step dialog is displayed. The New step button is circled.

  7. In the search box for the next step, search for email, then select the Send an email notification item from the filtered list of Actions.

    A list of available actions displayed. The Send an email action is circled.

  8. In the Send an email notification form, fill it out as follows, then select the + New Step button.

     | Field | Value |
     | --- | --- |
     | Subject | Action Required: Pump needs maintenance |
     | Body | Put the cursor in the field, then select Message Text from the Dynamic Content menu |

    The Flow action for sending an email is displayed. The message text option and New step button are circled.

  9. In the search bar for the next step, search for queue once more, then select the Delete message (V2) item from the filtered list of Actions.

    The Flow Action pane is displayed. The queue text is listed in the search field.

  10. In the Delete message form, fill it out as follows, then select the Save button.

     | Field | Value |
     | --- | --- |
     | Queue Name | flownotificationqueue |
     | Message ID | Put the cursor in the field, then select Message ID from the Dynamic Content menu |
     | Pop Receipt | Put the cursor in the field, then select Pop Receipt from the Dynamic Content menu |

    The Flow dialog displays the Delete message step. Message ID and Pop Receipt fields are populated. The Save button is circled. The available dynamic content pane has the available fields listed.

  11. Microsoft Power Automate will automatically name the Flow. You are able to edit this Flow in the future by selecting My flows from the left-hand menu.

    The Azure Flow blade is displayed. In the left pane, the My flows link is circled. The Send an email step is circled.

Task 5: Obtain connection settings for use with the Azure Function implementation

  1. Once the Function App has been provisioned, open the Fabrikam_Oil resource group and select the link for the storage account that was created with the Function App in Task 1. The name will start with "storageaccount".

    The resources are listed in the blade. The storage account is circled.

  2. From the left-hand menu, select Access keys and copy the key1 Connection string. Keep this value handy, as we'll be needing it in the next task.

    The pump function access keys are displayed. Key 1 is circled. The copy button for Connection string value is circled.

  3. Return to the Fabrikam_Oil resource group and select the link for the Event Hubs Namespace.

    A screen displays Azure resources. Event hub is circled.

  4. With the Event Hubs Namespace resource open, in the left-hand menu select the Shared access policies item located in Settings, then select the RootManageSharedAccessKey policy.

    The root connection string screen is displayed. Shared access policies and the RootManageSharedAccessKey link are circled.

  5. A blade will open where you will be able to copy the Primary Connection string. Keep this value handy as we'll be needing it in the next task.

    SAS Policy blade is displayed. Sample primary connection is displayed.  Copy button is circled.

Task 6: Create the local settings file for the Azure Functions project

It is recommended that you never check secrets, such as connection strings, into source control. One way to avoid this is to use settings files. The values stored in these files mimic environment values used in production. The local settings file is never checked into source control.

  1. Using Visual Studio Code, open the C:\MCW-Predictive-Maintenance-for-remote-field-devices-main\Hands-on lab\Resources\FailurePredictionFunction folder.

  2. Upon opening the folder in Visual Studio Code, you may be prompted to restore unresolved dependencies. If this is the case, select the Restore button.

    A sample Restore Unresolved Dependencies dialog is displayed. The Restore button is available.

  3. In this folder, create a new file named local.settings.json and populate it with the values obtained in the previous task as follows, then save the file (note: prediction model endpoint was obtained in Exercise 6, Task 1 - step 8):

     ```json
     {
       "IsEncrypted": false,
       "Values": {
         "AzureWebJobsStorage": "<paste storage account connection string>",
         "FUNCTIONS_WORKER_RUNTIME": "dotnet",
         "fabrikam-oil_RootManageSharedAccessKey_EVENTHUB": "<paste event hub namespace connection string>",
         "PredictionModelEndpoint": "<paste prediction model endpoint>"
       }
     }
     ```

    Note: If you need to get the prediction model endpoint again, launch your Databricks workspace. Open the Anomaly Detection notebook. Scroll down to the bottom to get the PredictionModelEndpoint value.

Task 7: Review the Azure Function code

  1. Observe the static Run function located in PumpFailurePrediction.cs. It identifies that the function will run every time the iot-central-feed event hub receives a message. It also uses a specific consumer group. Consumer groups are used when more than one process may be consuming the data received in the hub; each group gets its own view of the event stream, so dependent processes each receive all messages without contending with one another. The method receives an array (batch) of events from the hub on each execution, as well as an instance of a logger in case you wish to see values in the console (locally or in the cloud).

     ```csharp
     public static async Task Run(
         [EventHubTrigger("iot-central-feed",
             Connection = "fabrikam-oil_RootManageSharedAccessKey_EVENTHUB",
             ConsumerGroup = "ingressprocessing")] EventData[] events,
         ILogger log)
     ```
  2. On lines 29 - 36, the message body received from the event is deserialized into a Telemetry object. The Telemetry class matches the telemetry sent by the pumps. The Telemetry class can be found in the Models/Telemetry.cs file.

  3. From lines 48 - 60, we group the telemetry by Device ID and calculate the averages for each sensor reading. This helps us reduce the number of calls we send to the scoring service that contains our deployed prediction model.
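
     Inside the Run method, that step amounts to deserializing each event body and aggregating per device, along the lines of the sketch below. It assumes `using System.Linq;`, `using System.Text;`, and `using Newtonsoft.Json;` at the top of the file and the Microsoft.Azure.EventHubs EventData type; the property names used for grouping and averaging are illustrative, not the function's exact code.

     ```csharp
     // Illustrative only: deserialize each event body and average the readings per device.
     var readings = events
         .Select(e => JsonConvert.DeserializeObject<Telemetry>(
             Encoding.UTF8.GetString(e.Body.Array, e.Body.Offset, e.Body.Count)))
         .Where(t => t != null);

     var averagesByDevice = readings
         .GroupBy(t => t.DeviceId)   // DeviceId is illustrative; see Models/Telemetry.cs for the real member
         .Select(g => new
         {
             DeviceId = g.Key,
             AvgPumpRate = g.Average(t => t.PumpRate),
             AvgMotorPowerkW = g.Average(t => t.MotorPowerkW),
             AvgMotorSpeed = g.Average(t => t.MotorSpeed),
             AvgCasingFriction = g.Average(t => t.CasingFriction)
         });
     ```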

  4. Lines 66 - 75 send the received telemetry to the scoring service endpoint. This service will respond with a 1 - meaning the pump requires maintenance, or a 0 meaning no maintenance notifications should be sent.
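
     Conceptually, that call is an HTTP POST of the averaged readings to the scoring endpoint configured in the PredictionModelEndpoint setting, with the response interpreted as 0 or 1. A minimal sketch, assuming a shared HttpClient, Newtonsoft.Json, and `using System.Net.Http;`, `using System.Text;`, `using System.Threading.Tasks;`; the request and response shapes of the actual deployed model may differ, so this is not the function's exact code.

     ```csharp
     private static readonly HttpClient httpClient = new HttpClient();

     // Illustrative only: POST averaged sensor values to the scoring web service and
     // interpret the response (1 = maintenance required, 0 = healthy).
     private static async Task<bool> RequiresMaintenanceAsync(string endpoint, object averagedReading)
     {
         var content = new StringContent(
             JsonConvert.SerializeObject(averagedReading), Encoding.UTF8, "application/json");

         HttpResponseMessage response = await httpClient.PostAsync(endpoint, content);
         response.EnsureSuccessStatusCode();

         string body = await response.Content.ReadAsStringAsync();
         return body.Trim().Trim('[', ']', '"') == "1";
     }
     ```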

  5. Lines 81 - 109 check Table storage to ensure a notification for the specific device hasn't been sent in the last 24 hours. If a notification is due to be sent, it will update the table storage record with the current timestamp and send a notification by queueing a message onto the flownotificationqueue queue.
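
     The throttling and queueing logic can be sketched as follows. This version uses the Azure.Data.Tables and Azure.Storage.Queues client libraries (with `using System;` and `using System.Threading.Tasks;`), which may not be the packages the lab project references, but the flow of logic is the same: read the last notification time for the device, skip if it is less than 24 hours old, otherwise record the new time and enqueue a message for the Power Automate flow.

     ```csharp
     // Illustrative only: one notification per device per 24 hours, then enqueue a message
     // onto flownotificationqueue for the Power Automate flow.
     private static async Task NotifyIfDueAsync(string storageConnectionString, string deviceId)
     {
         var table = new TableClient(storageConnectionString, "DeviceNotifications");
         await table.CreateIfNotExistsAsync();

         var existing = await table.GetEntityIfExistsAsync<TableEntity>("Notifications", deviceId);
         DateTimeOffset? lastSent = existing.HasValue
             ? existing.Value.GetDateTimeOffset("LastNotificationUtc")
             : null;

         if (lastSent.HasValue && DateTimeOffset.UtcNow - lastSent.Value < TimeSpan.FromHours(24))
         {
             return; // a notification was already sent within the last 24 hours
         }

         // Record the send time, then enqueue the message that triggers the flow.
         var entity = new TableEntity("Notifications", deviceId)
         {
             ["LastNotificationUtc"] = DateTimeOffset.UtcNow
         };
         await table.UpsertEntityAsync(entity);

         var queue = new QueueClient(storageConnectionString, "flownotificationqueue");
         await queue.SendMessageAsync($"Pump {deviceId} requires maintenance.");
     }
     ```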

Task 8: Run the Function App locally

We will run the function app and the generator to add values to the queues, so that they can be processed by Power Automate.

Note: You should also be running the Generator, as done in Exercise 2, Task 3. If this isn't running, you will not get values in your queues.

  1. Press Ctrl+F5 to run the Azure Function code.

  2. After some time, you should see log statements indicating that a message has been queued (indicating that Microsoft Power Automate will send a notification email).

    A sample function log is displayed. A notification of email has been sent is circled.

  3. Once a message has been placed on the flownotificationqueue, it will trigger the notification flow that we created and send an email to the field workers. These emails are sent in 5-minute intervals.

    Note: If you are not seeing any failures, you can go back into the IoT Central portal and run the Command for one of the devices. This can trigger the failure.

    A sample Microsoft Power Automate email message is displayed.

  4. You can now stop the locally running functions by selecting the Terminal window and pressing Ctrl+C.

Task 9: Prepare the Azure Function App with settings

  1. In Task 6, we created a local settings file to hold environment variables that are used in our function code. We need to mirror these values in the Azure Function App as well. In the Azure portal, access the Fabrikam_Oil resource group, and open the pumpfunctions Function Application.

  2. Select the Configuration option in the left-hand menu.

    The Azure Function Application Overview window is displayed with the Configuration item highlighted.

  3. In the Application Settings section, we will add the following application settings to mimic those that are in our local.settings.json file. Add a new setting by selecting the New application setting button.

     | Setting | Value |
     | --- | --- |
     | fabrikam-oil_RootManageSharedAccessKey_EVENTHUB | The Event Hub namespace connection string value from the local.settings.json file |
     | PredictionModelEndpoint | The prediction model endpoint value from the local.settings.json file |

    The Application Settings tab is selected and the new application setting button is highlighted.

    The form to add an application setting is displayed, consisting of a name and value textbox.

  4. Once complete, select the Save button from the top menu to commit the changes to the application configuration.

    The Application Settings tab is selected and the Save button is highlighted.

Task 10: Deploy the Function App into Azure

  1. Now that we have been able to successfully run our functions locally, we are ready to deploy them to the cloud. The first step to deployment is to ensure that you are logged in to your Azure account. To log in, press Ctrl+Shift+P to display the command palette.

  2. In the textbox of the command palette, type Azure: Sign In and press Enter (or select the command from the list). This will open a Microsoft authentication webpage in your default browser. Logging into this window will authenticate Visual Studio Code with your ID.

    The Visual Studio Code command palette is displaying Azure:Sign In as a command option. The Azure:Sign In command is highlighted.

  3. Once authenticated, we are ready to deploy. Press Ctrl+Shift+P once again to open the command palette, type Azure Functions: Deploy, and select the Azure Functions: Deploy to Function App command from the list.

    The Visual Studio Code command palette is displaying Azure Functions. The Deploy to Function App command is circled.

  4. The first step of this command is to identify where we are deploying the function to. In our case, we have already created a Function App named pumpfunctions to house our function. Select this value from the list of available choices.

    The Visual Studio Code command palette is displaying potential destinations for deployment of our function, the pumpfunctions function app is highlighted

  5. You may be prompted to confirm that you want to deploy to pumpfunctions; select the Deploy button in this dialog.

    A dialog asking if we are sure we want to deploy to pump functions is displayed. The Deploy button is highlighted.

  6. After some time, a notification window will display indicating the deployment has completed.

    A notification is shown indicating the deployment to pumpfunctions has completed.

  7. Returning to the Azure Portal, in the Fabrikam_Oil resource group, open the pumpfunctions function app and observe that our function that we created in Visual Studio Code has been deployed.

    The Azure Portal Function Application overview window is displayed with the pumpfunctions application expanded. The functions node is also expanded and the function named PumpFailurePrediction is highlighted.

After the hands-on lab

Duration: 10 minutes

Task 1: Delete Lab Resources

  1. In the Azure portal, select Resource Groups. Open the Fabrikam_Oil resource group used for this lab, and select the Delete resource group button.

    The Azure Resource Group panel is displayed. The Delete resource group link is circled.

  2. Delete the Microsoft Power Automate flow that we created. Access Microsoft Power Automate and log in. From the left-hand menu, select My flows. Select the ellipsis button next to the flow we created in this lab and select Delete.

    The Power Automate Flows panel is displayed. The ellipsis and delete links are circled.

You should follow all steps provided after attending the Hands-on lab.