
4b Enterprise Architecture


This section is a work-in-progress, entirely new for the forthcoming Power Platform Adoption Framework, Third Edition. Please stick with us and share your own ideas as we continue to build it out.

Overview of Enterprise Architecture

Enterprise management and governance of the platform cannot be effective if we've not architected the platform for scale within the organization.

How do we set up Power Platform environments within the tenant, and use that architecture to support other pillars such as ALM and maturing our security model? How do we license and authenticate users? Which re-usable components shall we supply to developers (and how shall we supply them)? Are there other data sources in the organization's overall data ecosystem (e.g. data warehouse, Azure data services, etc.) for which we need to account and with which we may need to integrate? Here we’re primarily concerned with how the platform fits within the larger IT picture, from both a technical and governance perspective. The dimensions for this pillar are:

  • Authentication to Power Platform using Azure Active Directory (AAD) and Microsoft 365 (see the sketch after this list)
  • Environments architected in accordance with the Environmental Architecture Model best practice
  • License Management and assignment within the Microsoft 365 tenant
  • Reusable Components including base solution, templates and branding, standard controls, etc.
  • Data Ecosystem of other data stores (e.g. data warehouse, other systems, etc.) we must consider
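For the Authentication dimension, the sketch below shows one common pattern: a service principal acquiring an Azure AD token for the Dataverse Web API using MSAL's client-credentials flow. This is a minimal illustration rather than a prescription from the framework; the tenant, app registration, secret, and environment URL are hypothetical placeholders.

```python
# Minimal sketch: service-to-service authentication to a Dataverse
# environment using Azure AD client credentials via MSAL.
# Tenant ID, client ID, secret, and environment URL are placeholders.
import msal
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"    # hypothetical tenant
CLIENT_ID = "11111111-1111-1111-1111-111111111111"    # hypothetical app registration
CLIENT_SECRET = "<secret-from-key-vault>"             # never hard-code in practice
ENVIRONMENT_URL = "https://contoso.crm.dynamics.com"  # hypothetical environment

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# The ".default" scope requests whatever permissions the app registration holds.
result = app.acquire_token_for_client(scopes=[f"{ENVIRONMENT_URL}/.default"])

if "access_token" in result:
    # WhoAmI is a simple Dataverse Web API call that verifies the token works.
    response = requests.get(
        f"{ENVIRONMENT_URL}/api/data/v9.2/WhoAmI",
        headers={"Authorization": f"Bearer {result['access_token']}"},
    )
    print(response.json())
else:
    print("Token acquisition failed:", result.get("error_description"))
```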

Dimensions of Enterprise Architecture

Authentication

Environments

The Environmental Architecture Model is the standard for organizations whose Power Platform footprint has grown quite large and who report struggling to find a workable model for architecting the platform across multiple environments. Questions such as “How do we ensure mission-critical workloads are not disturbed by citizen developers?” and “Should an environment be owned by central IT or by the business units themselves?” are common.

These challenges reinforce the importance of the “Enterprise Architecture” pillar in general and the importance of sound environmental planning from the very start of the adoption in particular.

It is important to segment all workloads by criticality. In other words, all production workloads across the organization should be categorized as either Productivity, Important, or Critical as shown below.

Segmenting Power Platform Workloads by Criticality
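To make the segmentation concrete, here is a purely illustrative sketch of such a rubric in Python. The impact, complexity, and scope scales and the thresholds are hypothetical; each organization should calibrate its own criteria.

```python
# Illustrative sketch only: a simple rubric for segmenting workloads into
# Productivity / Important / Critical tiers. Scales and thresholds are
# hypothetical; each organization should define its own.
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    PRODUCTIVITY = "Productivity"
    IMPORTANT = "Important"
    CRITICAL = "Critical"

@dataclass
class Workload:
    name: str
    impact: int      # 1 (individual) to 5 (organization-wide), hypothetical scale
    complexity: int  # 1 (simple form) to 5 (many integrations), hypothetical scale
    scope: int       # 1 (single team) to 5 (external-facing), hypothetical scale

def classify(w: Workload) -> Criticality:
    score = w.impact + w.complexity + w.scope
    if score >= 12:
        return Criticality.CRITICAL
    if score >= 7:
        return Criticality.IMPORTANT
    return Criticality.PRODUCTIVITY

# Example: a field-inspection app used across the whole maintenance fleet.
print(classify(Workload("Fleet Inspection", impact=5, complexity=3, scope=4)))
# -> Criticality.CRITICAL
```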

From there we can apply the Adoption Framework’s Environmental Architecture Model shown in the diagram below.

Environmental Architecture Model

Here we have “Centralized Responsibility” (from less to more) running along the vertical axis. There are generally four levels of centralization of responsibility for an environment and the workloads that it contains:

  • Central IT. These environments are proactively managed by a central IT organization. The definition of “central” might vary depending on the organization (e.g. quasi-independent subsidiaries may have their own “Central IT”), but the fundamental idea here is that these environments are wholly owned by an element of the organization focused on IT.

  • Business Group. These environments are owned by the business units within the organization, though they are to be proactively managed by IT personnel embedded in those business units. In these scenarios, the central IT organization may have provisioned the environment or provided guidelines to business units in managing that environment, but the central premise is that the management and support of that environment is fundamentally the responsibility of the owning business unit.

  • Teams. These environments are associated with a team within Microsoft Teams, and provide a relational data service known as "Dataverse for Teams" (formerly "Project Oakdale") upon which Power Apps may be built inside of Teams. Think of this data service as "Dataverse Light". These environments are "owned" by Central IT to the extent that they may be managed in the Power Platform Admin Center (PPAC) and are visible in the CoE Starter Kit, though they are directly managed as part of the team to which they are associated. They are therefore fundamentally the responsibility of the owning team.

  • Localized Productivity. These environments are generally highly unmanaged, provisioned essentially for the purpose of providing citizen developers a place to deploy apps categorized as “Productivity”. Many organizations choose to use the Power Platform “Default” environment for this purpose, often re-naming that environment as “Productivity” or “Local Productivity”.
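As a practical starting point for mapping existing environments to these tiers, the hedged sketch below inventories a tenant's environments via the Business Application Platform (BAP) admin endpoint, the same endpoint the CoE Starter Kit draws on. The API version, token scope, and the display-name convention used to infer a tier are assumptions to verify against current Microsoft documentation.

```python
# Hedged sketch: inventory environments so they can be mapped to the
# responsibility tiers above. The api-version, token scope, and naming
# convention are assumptions, not documented guarantees.
import msal
import requests

app = msal.ConfidentialClientApplication(
    "11111111-1111-1111-1111-111111111111",  # hypothetical app registration
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<secret>",
)
token = app.acquire_token_for_client(
    scopes=["https://service.powerapps.com/.default"]  # assumed admin scope
)["access_token"]

resp = requests.get(
    "https://api.bap.microsoft.com/providers/Microsoft.BusinessAppPlatform"
    "/scopes/admin/environments?api-version=2020-10-01",
    headers={"Authorization": f"Bearer {token}"},
)

# Hypothetical convention: the tier is encoded in the display name,
# e.g. "Central IT - Finance PROD" or "Business Group - Marketing DEV".
for env in resp.json().get("value", []):
    name = env["properties"].get("displayName", env["name"])
    tier = name.split(" - ")[0] if " - " in name else "Unclassified"
    print(f"{tier:15} {name}")
```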

Beneath the tiers of responsibility we have the criteria of impact, complexity, and scope that we established above in segmenting workloads by criticality.

“Criticality of Workload” (lower to higher) runs along the horizontal axis. Environments should then be created to house apps, aligning the critical, important, and productivity categorization with ownership as shown in the diagram. Note that the three-stack of environments indicates that the best practice here is to establish application lifecycle management (e.g. DEV > TEST > PROD), using automation where possible, and never to commit changes directly in production.
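As one illustration of that DEV > TEST > PROD practice, the sketch below drives the Microsoft Power Platform CLI (pac) from Python to export a managed solution from DEV and import it into TEST. The solution name and environment URLs are hypothetical, and in practice this logic usually lives in an Azure DevOps or GitHub Actions pipeline rather than an ad hoc script.

```python
# Hedged sketch of DEV > TEST promotion using the Power Platform CLI ("pac")
# driven from Python. Solution name and environment URLs are hypothetical.
import subprocess

SOLUTION = "ContosoInspections"                   # hypothetical solution
DEV_URL = "https://contoso-dev.crm.dynamics.com"  # hypothetical environments
TEST_URL = "https://contoso-test.crm.dynamics.com"

def pac(*args: str) -> None:
    # Run a pac CLI command, raising on failure so the pipeline stops.
    subprocess.run(["pac", *args], check=True)

# Authenticate against DEV and export the solution as a managed artifact.
pac("auth", "create", "--url", DEV_URL)
pac("solution", "export", "--name", SOLUTION,
    "--path", f"{SOLUTION}_managed.zip", "--managed")

# Authenticate against TEST and import the managed artifact.
pac("auth", "create", "--url", TEST_URL)
pac("solution", "import", "--path", f"{SOLUTION}_managed.zip")
```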

Finally, the Environmental Architecture Model suggests best practice for data storage along the spectrum of criticality:

  • Excel should be used sparingly, if ever, and only to provide data storage for the most basic and temporary workloads.

  • SharePoint is an acceptable data store for productivity workloads built by citizen developers, but should be avoided for more advanced productivity workloads and certainly for important and critical workloads.

  • Microsoft Dataverse is the data source native to Power Platform, and should be considered the default data source for important, critical, and some more sophisticated productivity workloads. Note that "Dataverse for Teams" environments inside of Teams can fundamentally be thought of as "Dataverse Light" in that they offer a similar relational database experience to Dataverse, but do not offer the full schema and are feature-limited.

  • Azure data services can provide an alternative to Dataverse in organizations with significant pre-existing Azure data investments and infrastructure. However, significant functionality, particularly within Power Apps (e.g. the ability to create model-driven apps), is sacrificed when using Azure as a data source.

Microsoft has addressed this topic with additional guidance in a blog post of 30 October 2019: Establishing an Environment Strategy for Microsoft Power Platform.

License Management

Reusable Components

Data Ecosystem

Note: Much of this model first appeared in, and is expanded upon in, the blog essay "Power Platform in a Modern Data Platform Architecture".

This is particularly important in the complex data ecosystems common to large enterprises. Let's expand on the data ecosystem concept with a modern data platform architecture modeled as a loop or cycle (rather than a linear flow), particularly when Power Platform is leveraged to develop end-user solutions. These solutions are seamlessly integrated with the Microsoft Cloud and our modern data platform architecture.

The diagram below conceptualizes the data platform as involving several major areas including data collection, sources, ingestion, storage, analysis, and visualization. We’ll walk through each of them below. The goal here is to provide a platform-based model for taking data from the point of its creation to becoming actually valuable to an organization, across any industry. In other words, what’s our enterprise approach to deriving business value from the vast amount of data to which we have access?

Power Platform in a Modern Data Platform Architecture

It’s important to caveat that the model above is in no way all-inclusive of everything we can do in this space, all of the services and capabilities available to us, and all of the connections that can be made between discrete components shown (and not shown) on the diagram. Also note that the lines of demarcation between many of these components are not cut and dried. For example, Dataverse’s role as both source and storage necessarily places it in the middle of the diagram. The same could be said of Cosmos DB or Azure SQL (which doesn’t even appear as its own icon here), though the goal of this discussion is to model Power Platform’s role in that modern data ecosystem. Similarly, we’ve chosen to depict the model as a loop because Power Platform components are critical at both the start and the end, as both the means of end-user data collection and the ultimate usability of that data (see “Visualization”) by humans.

Data Collection

Let’s begin atop the loop with the “Data Collection” area. These are the transactional solutions through which users collect and otherwise generate data. The use cases here are nearly infinite, but as hard examples we’ll offer a call center operator working a case with a customer, an insurance adjuster taking photos and geo-location data in the field, a nurse updating their patient’s medical records on administration of a vaccine, a maintenance technician completing an inspection or repair checklist on a bus or train, an employee interacting with a chatbot to update their HR particulars, or even a soldier being accounted for just prior to leaping from an airplane. The sky’s the limit (literally).

Our data platform model specifically points out that the native data service for these solutions is Microsoft Dataverse, sitting visually in the center of our loop because it interacts seamlessly with so many other components of our ecosystem. Dataverse further exchanges data with a number of storage technologies, including Data Lake, Cosmos DB, and Azure Blob Storage that we will discuss as we progress around the loop.

Data Sources

Dataverse is itself a structured database, so it pops up again in “Data Sources”, the next stop along our loop. Data is often not so conveniently gathered through the point solutions described in the data collection section above (though there is tremendous overlap between data collection and data sources, as is the case throughout this model). There are many (thousands or even more) sources from which data may be pulled into the platform.
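As a small illustration of Dataverse acting as a source, the sketch below pulls a few rows out of a Dataverse environment through its Web API with a simple OData query, reusing a token acquired as in the earlier authentication sketch. The environment URL and the accounts entity are illustrative.

```python
# Minimal sketch: reading structured data from Dataverse via its Web API.
# Environment URL and the "accounts" entity set are illustrative; the token
# would be acquired via MSAL as in the authentication sketch earlier.
import requests

ENVIRONMENT_URL = "https://contoso.crm.dynamics.com"  # hypothetical
ACCESS_TOKEN = "<token from MSAL, as shown earlier>"

# OData query: select two columns and take the first five rows.
resp = requests.get(
    f"{ENVIRONMENT_URL}/api/data/v9.2/accounts",
    params={"$select": "name,accountnumber", "$top": "5"},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    },
)
for row in resp.json().get("value", []):
    print(row["name"], row.get("accountnumber"))
```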

Ingestion

At this point in the loop we are ingesting data from our structured, unstructured, and streaming sources, which (remember) may include thousands of devices. It’s also important to note here that on-premises data sources are still prominent in many enterprises. These sources may be ingested as well, often requiring an intermediary data gateway to move data from on-premises to cloud (and, use case dependent, back again). This is particularly true in regulated industries or public sector situations in which data classification or data sovereignty considerations must be accounted for. Whatever the sources, data ingestion is about taking in vast amounts of data, establishing relationships with other data, making decisions on where to store it, and then actually getting it to where it needs to go. Oh, and we’re doing this billions (or more) of times.
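To ground the streaming case, here is a hedged sketch of device telemetry being pushed into Azure Event Hubs, one common ingestion service in this architecture (IoT Hub, Data Factory, and others are alternatives). The connection string, hub name, and payload shape are hypothetical.

```python
# Hedged sketch of streaming ingestion: device telemetry pushed to Azure
# Event Hubs. Connection string, hub name, and payload are hypothetical.
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    "<event-hubs-connection-string>",  # hypothetical, sourced from Key Vault
    eventhub_name="telemetry",         # hypothetical hub name
)

# Batch a handful of readings; real ingestion does this millions of times.
batch = producer.create_batch()
for reading in [{"device": "bus-042", "temp_c": 71.3},
                {"device": "bus-043", "temp_c": 68.9}]:
    batch.add(EventData(json.dumps(reading)))

producer.send_batch(batch)
producer.close()
```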

Storage

Data must be stored in an appropriate medium as it passes through our ingestion and integration points. Note that in the Power Platform context, as previously discussed, Dataverse is capable of integrating directly with these storage services. Think of it not as circumventing the ingestion stage, but rather as direct efficiencies that Microsoft has constructed between its service for structured application data (Dataverse) and several of its other cloud data storage capabilities. Though there are many possibilities here, we’ve identified several that work well in common scenarios.

Analysis

The entire goal of Power Platform in the broader data ecosystem is to access the insights and decision making made possible through “Analysis”. So it is in this stage where we really achieve value through the application of cognition around what is seen, spoken, and read in our data, through machine learning, and through analysis of stream and customer data. It is also in this stage that we really begin to close the loop around Power Platform in our data ecosystem as we feed data directly to Dynamics 365 Customer Insights, which sits within the native Microsoft business applications / Power Platform sphere as part of the Dynamics 365 family of applications.

Visualization

The final stage in our data platform loop, “Visualization”, is where we surface insights and make our analysis usable to humans in an interactive way. Our primary tool here is Power BI, one of Power Platform’s four principal native components (alongside Power Apps, Power Automate, and Power Virtual Agents discussed previously). In Power BI we create richly interactive charts, dashboards, and reports drawing on our entire ecosystem of data (including Dataverse, as shown in the model). We also integrate, transform, and manipulate data here, though our ability to do this at scale is drastically enhanced by the cloud-based ingestion, storage, and analysis capabilities discussed previously.
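One concrete way Power BI surfaces data in near real time is a push dataset: the hedged sketch below posts rows to a push dataset through the Power BI REST API so a dashboard tile updates as new data arrives. The dataset ID and table name are placeholders, and the dataset must already have been created as a push dataset.

```python
# Hedged sketch: pushing rows into an existing Power BI push dataset via the
# Power BI REST API. Dataset ID and table name are placeholders.
import requests

ACCESS_TOKEN = "<token scoped to https://analysis.windows.net/powerbi/api/.default>"
DATASET_ID = "22222222-2222-2222-2222-222222222222"  # hypothetical dataset
TABLE = "Inspections"                                # hypothetical table

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/tables/{TABLE}/rows",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"rows": [{"vehicle": "bus-042", "status": "Out of service"}]},
)
resp.raise_for_status()  # success means the rows were accepted
```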

Closing the Loop

Finally, because Power BI is integrated with the rest of Power Platform (and increasingly so, as veterans of the technology who have been through this evolution can attest), its components are embeddable inside of Power Apps, and vice versa, so that decision data and transactions within end-user applications may be linked. In other words, the customer service agent takes action on a customer inside of an app based on data displayed to that agent in real time within the same end-user experience. Power Automate plays a role here, too: data in Power BI can trigger automation that fires back in the “Data Collection” area of our model. Thus the insights surfaced to the user through Power BI (be they customer insights in the call center, vehicle telemetry for the adjuster in the field, patient medical insights in the clinic, fleet “out of service” data on the buses or rail, predictive analytics around employee churn, or qualification data for soldiers leaping from airplanes) drive end-user actions within apps, place contextual information front and center, and continually improve users’ ability to take action at the point of data collection.