Microsoft Application Insights for Service Fabric

This repository hosts code for functionality that augments Application Insights telemetry experience for Service Fabric.

Problem Statement

Application Insights is a service that lets you monitor your live application's performance and usage. The Application Insights SDKs are a family of NuGet packages that let you collect and send telemetry from your applications. Information about SDKs for different platforms and languages is available on the Application Insights SDK page.

When the SDKs mentioned above collect application telemetry, they do not assume or record an application-specific context, because that context is environment specific. For microservices running inside Service Fabric, it is important to be able to recognize which service context the telemetry was generated in. For example, request or dependency telemetry would not make sense without context information such as the service name, service type, and instance/replica IDs.

Solving the context problem

This repo provides code for a telemetry initializer (and some associated utility classes), shipped as NuGet packages and specifically designed for Service Fabric. When used as described in the following sections, the initializer automatically augments telemetry collected by the different Application Insights SDKs that may be added to a service. For general information on Application Insights telemetry initializers, follow this blog entry.
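To illustrate the general mechanism (this is a sketch of the pattern, not the shipped initializer's code), a telemetry initializer implements `Microsoft.ApplicationInsights.Extensibility.ITelemetryInitializer` and is called for every telemetry item before it is sent, letting it stamp extra context onto the item; the property name and value below are hypothetical:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Minimal sketch of a telemetry initializer: Application Insights invokes
// Initialize() on every telemetry item before it is sent, so the initializer
// can enrich items with contextual properties.
public class SampleContextTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is ISupportProperties itemWithProperties &&
            !itemWithProperties.Properties.ContainsKey("SampleContextKey"))
        {
            // "SampleContextKey" / "SampleContextValue" are made-up names
            // for illustration only.
            itemWithProperties.Properties["SampleContextKey"] = "SampleContextValue";
        }
    }
}
```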

Nuget Packages

This repository produces the following two NuGet packages: Microsoft.ApplicationInsights.ServiceFabric and Microsoft.ApplicationInsights.ServiceFabric.Native.

Scope

  • This repository and the associated NuGet packages currently deal only with .NET Framework and .NET Core applications.
  • The NuGet packages do not auto-collect or generate any telemetry. They merely add to the telemetry generated by other sources, such as the .NET Web SDK, or generated by the user directly.
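For instance, telemetry you track yourself is enriched in the same way once the initializer is registered (a minimal sketch; the event name is made up):

```csharp
using Microsoft.ApplicationInsights;

// The package does not generate this event itself; it only enriches items
// like it (adding Service Fabric context) on their way through the pipeline.
var client = new TelemetryClient();
client.TrackEvent("SampleEvent"); // "SampleEvent" is a hypothetical name
```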

Microsoft.ApplicationInsights.ServiceFabric - For Service Fabric Lift and Shift Scenarios

You can deploy almost any existing application to Service Fabric as either a guest executable or a guest container service. These are also sometimes referred to as lift and shift applications. Add the Microsoft.ApplicationInsights.ServiceFabric NuGet package to your guest executable / container services.

.Net Framework

For .NET applications, applying the NuGet package automatically adds the telemetry initializer to the ApplicationInsights.config file. The following sample shows the new entry in the context of entries added by other Application Insights SDK packages.

<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>...</InstrumentationKey>
  <TelemetryInitializers>
    <Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDependenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector"/>

<!-- **************************************************************************************************************** -->
    <Add Type="Microsoft.ApplicationInsights.ServiceFabric.FabricTelemetryInitializer, Microsoft.AI.ServiceFabric"/>
<!-- **************************************************************************************************************** -->

  </TelemetryInitializers>
  <TelemetryModules>
    <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector" />
  </TelemetryModules>
</ApplicationInsights>

OWIN applications: Adding support for requests, dependencies and Live Stream

In order to have requests and dependencies show up in the Application Insights Live Stream for OWIN applications, a few additional steps are required:

  1. Ensure the packages Microsoft.ApplicationInsights.PerfCounterCollector, Microsoft.ApplicationInsights.DependencyCollector and ApplicationInsights.OwinExtensions are installed.
  2. Customize the ApplicationInsights.config file to include the additional modules and the telemetry processor:
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <InstrumentationKey>...</InstrumentationKey>
  <TelemetryInitializers>
    <Add Type="Microsoft.ApplicationInsights.ServiceFabric.FabricTelemetryInitializer, Microsoft.AI.ServiceFabric"/>      
    <Add Type="ApplicationInsights.OwinExtensions.OperationIdTelemetryInitializer, ApplicationInsights.OwinExtensions"/>
    <Add Type="Microsoft.ApplicationInsights.ServiceFabric.CodePackageVersionTelemetryInitializer, Microsoft.AI.ServiceFabric.Native"/>
  
    <!-- In order to collect dependencies include this and the nuget package Microsoft.ApplicationInsights.DependencyCollector -->
    <Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDependenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector"/>
  </TelemetryInitializers>
  <TelemetryModules>
    <!-- In order to collect dependencies include this and the nuget package Microsoft.ApplicationInsights.DependencyCollector -->
    <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector"/>    
    
    <!--  In order to collect performance/request information include this and the nuget package Microsoft.ApplicationInsights.PerfCounterCollector -->
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector"/>
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector"/>    
  </TelemetryModules>
  <TelemetryProcessors>
    <!-- Adds support to live stream -->
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryProcessor, Microsoft.AI.PerfCounterCollector"/>
  </TelemetryProcessors>
</ApplicationInsights>

For more information, check this post.

.Net Core

The AI .NET Core SDK's configuration model is quite different from the .NET Framework AI SDK's: almost all configuration for .NET Core is done in code. For example, the AI SDK for ASP.NET Core provides the UseApplicationInsights() utility method that lets you set things up in code. When using the Service Fabric specific NuGet package, simply make sure to register the FabricTelemetryInitializer through dependency injection before calling the UseApplicationInsights() method, as shown below:

public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        
        // Adding Service Fabric Telemetry Initializer
        .ConfigureServices(services => services.AddSingleton<ITelemetryInitializer>((serviceProvider) => new FabricTelemetryInitializer()))
        
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()

        // Configuring Application Insights
        .UseApplicationInsights()
        
        .Build();

    host.Run();
}

The NuGet package reads the context from environment variables provided to the guest executable / guest container. The context added looks like the following:

*(Screenshot: context fields for guest containers as shown on the Application Insights portal)*
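As a rough sketch of the mechanism only, Service Fabric exposes service identity to guest executables through environment variables; the variable names below (such as `Fabric_ServiceName`) are assumptions based on the Service Fabric runtime, not taken from this repository:

```csharp
using System;

// Sketch: a guest executable reading Service Fabric context from environment
// variables. Names like "Fabric_ServiceName" are assumed from the Service
// Fabric runtime; fall back gracefully when a variable is absent.
class FabricContextDemo
{
    static string GetFabricContext(string variableName) =>
        Environment.GetEnvironmentVariable(variableName) ?? "(not set)";

    static void Main()
    {
        Console.WriteLine($"Application: {GetFabricContext("Fabric_ApplicationName")}");
        Console.WriteLine($"Service:     {GetFabricContext("Fabric_ServiceName")}");
        Console.WriteLine($"Node:        {GetFabricContext("Fabric_NodeName")}");
    }
}
```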

Microsoft.ApplicationInsights.ServiceFabric.Native - For Service Fabric Reliable Services

Using the Reliable Services framework can give the user, among other things, very high resource density, where a single process may host multiple microservices or even multiple instances of a single service. This is different from lift and shift scenarios, where a single container instance likely runs an instance of a single microservice. Microsoft.ApplicationInsights.ServiceFabric.Native relies on the Microsoft.ApplicationInsights.ServiceFabric NuGet package but provides additional utility methods that let you propagate the context from the ServiceContext object to the telemetry initializer.

.Net Framework

Because the Microsoft.ApplicationInsights.ServiceFabric.Native package installs the Microsoft.ApplicationInsights.ServiceFabric package as a dependency, the telemetry initializer will already be registered in the ApplicationInsights.config file, as described in the section on the Microsoft.ApplicationInsights.ServiceFabric package above. In your service entry points, you should use FabricTelemetryInitializerExtension.SetServiceCallContext(ServiceContext), provided by the Microsoft.ApplicationInsights.ServiceFabric.Native package. This makes sure that the desired context retrieved from the ServiceContext object is propagated through all threads spawned from the service entry point onward and is added to the outgoing telemetry.

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    FabricTelemetryInitializerExtension.SetServiceCallContext(this.Context);

    return new ServiceInstanceListener[]
    {
        new ServiceInstanceListener(serviceContext => new OwinCommunicationListener(Startup.ConfigureApp, serviceContext, ServiceEventSource.Current, "ServiceEndpoint"))
    };
}

-OR-

protected override Task RunAsync(CancellationToken cancellationToken)
{
    FabricTelemetryInitializerExtension.SetServiceCallContext(this.Context);

    return base.RunAsync(cancellationToken);
}

.Net Core

For .NET Core, simply create the telemetry initializer using FabricTelemetryInitializerExtension.CreateFabricTelemetryInitializer(ServiceContext) and register it as an ITelemetryInitializer for dependency injection before any calls that configure the Application Insights pipeline.

protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new ServiceInstanceListener[]
    {
        new ServiceInstanceListener(serviceContext =>
            new WebListenerCommunicationListener(serviceContext, "ServiceEndpoint", url =>
            {
                ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting WebListener on {url}");

                return new WebHostBuilder().UseWebListener()
                            .ConfigureServices(
                                services => services
                                    .AddSingleton<StatelessServiceContext>(serviceContext)
                                    .AddSingleton<ITelemetryInitializer>((serviceProvider) => FabricTelemetryInitializerExtension.CreateFabricTelemetryInitializer(serviceContext)))
                            .UseContentRoot(Directory.GetCurrentDirectory())
                            .UseStartup<Startup>()
                            .UseApplicationInsights()
                            .UseUrls(url)
                            .Build();
            }))
    };

}

The context added looks like the following:

*(Screenshot: context fields for reliable services as shown on the Application Insights portal)*

Note: Cloud role name and Cloud role instance, shown in the screenshot above, represent the service and service instance respectively. They are special fields in the Application Insights schema that are used as aggregation units to power certain experiences such as the Application map.
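As an illustration of how these schema fields are populated (a sketch of the general pattern, not the shipped initializer's implementation), a telemetry initializer can set them through the telemetry context; the role values below are hypothetical:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Sketch: a telemetry initializer that fills the Cloud role fields. The
// values here are made up; the shipped FabricTelemetryInitializer derives
// them from the actual Service Fabric service context.
public class CloudRoleTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            telemetry.Context.Cloud.RoleName = "fabric:/MyApp/MyService";    // hypothetical service name
            telemetry.Context.Cloud.RoleInstance = "MyService_Instance_1";   // hypothetical instance id
        }
    }
}
```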

Trace Correlation with Service Remoting

The NuGet package enables correlation of traces produced by Service Fabric services, regardless of whether services communicate via HTTP or via the Service Remoting protocol. For HttpClient, correlation is supported implicitly. For Service Remoting, you just need to make some minor changes to your existing code and correlation will be supported.

There is a difference in initialization depending on whether or not you are using the Remoting V2 stack. You can follow this page to upgrade from Remoting V1 to Remoting V2.

Note: The V2 attribute name is different for Reliable Services and Actors: FabricTransportServiceRemotingProvider and FabricTransportActorRemotingProvider, respectively.
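For reference, opting a Reliable Service into Remoting V2 is typically done with an assembly-level attribute like the following (a hedged sketch based on the Microsoft.ServiceFabric.Services.Remoting API; see the linked upgrade page for the authoritative form):

```csharp
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.FabricTransport;

// Sketch: assembly attribute selecting the Remoting V2 stack for a
// Reliable Service. Actor projects use FabricTransportActorRemotingProvider
// instead, as noted above.
[assembly: FabricTransportServiceRemotingProvider(
    RemotingListenerVersion = RemotingListenerVersion.V2,
    RemotingClientVersion = RemotingClientVersion.V2)]
```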

Remoting V2

Once you are using Remoting V2, you just need to add two more telemetry modules to your project.

For .NET Framework, the modules are added automatically to ApplicationInsights.config when you install the NuGet package:

<TelemetryModules>
    <Add Type="Microsoft.ApplicationInsights.ServiceFabric.Module.ServiceRemotingRequestTrackingTelemetryModule, Microsoft.AI.ServiceFabric.Native"/>
    <Add Type="Microsoft.ApplicationInsights.ServiceFabric.Module.ServiceRemotingDependencyTrackingTelemetryModule, Microsoft.AI.ServiceFabric.Native"/>
</TelemetryModules>

For .NET Core, add the modules in the same place where you add the telemetry initializer:

ConfigureServices(services => services
    ...
    .AddSingleton<ITelemetryModule>(new ServiceRemotingDependencyTrackingTelemetryModule())
    .AddSingleton<ITelemetryModule>(new ServiceRemotingRequestTrackingTelemetryModule())
)

Remoting V1

If you want to stick with the Remoting V1 stack, you can instead change your code to use the correlating proxy and message handler.

  1. For the service invoking the request, change how the proxy is created:
    // IStatelessBackendService proxy = ServiceProxy.Create<IStatelessBackendService>(new Uri(serviceUri));
    var proxyFactory = new CorrelatingServiceProxyFactory(this.serviceContext, callbackClient => new FabricTransportServiceRemotingClientFactory(callbackClient: callbackClient));
    IStatelessBackendService proxy = proxyFactory.CreateServiceProxy<IStatelessBackendService>(new Uri(serviceUri));
    
  2. For the service receiving the request, add a message handler to the service listener:
    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        return new ServiceInstanceListener[1]
        {
            new ServiceInstanceListener(context => new FabricTransportServiceRemotingListener(context, new CorrelatingRemotingMessageHandler(context, this)))
        };
    }
    

Enrich telemetry with Component Version

In order to have telemetry enriched with a component version based on the code package version of the Service Fabric Reliable Service, add the following XML snippet to the <TelemetryInitializers> section:

<Add Type="Microsoft.ApplicationInsights.ServiceFabric.CodePackageVersionTelemetryInitializer, Microsoft.AI.ServiceFabric.Native"/>

Adding the component version to telemetry items makes it possible to track telemetry across different versions of running services.

Customizing Context

< to be added >

Branches

  • master contains the latest published release located on NuGet.
  • develop contains code for the next release.

Contributing

Report Issues

Please file bugs, discussions, or any other interesting topics as issues.

Developing

We strongly encourage contributions.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
