
How to find the source of "TelemetryChannel found a telemetry item without an InstrumentationKey" #2070

Open
rnarayana opened this issue Sep 25, 2020 · 58 comments

@rnarayana

rnarayana commented Sep 25, 2020

  • SDK versions:
    Microsoft.ApplicationInsights Version="2.15.0"
    Microsoft.ApplicationInsights.AspNetCore Version="2.15.0"
    Microsoft.ApplicationInsights.Kubernetes Version="1.1.2"
    Microsoft.ApplicationInsights.NLogTarget Version="2.15.0"
  • Runtime version (e.g. net461, net48, netcoreapp2.1, netcoreapp3.1, etc. You can find this information from the *.csproj file):
    .NET Core 3.1.7
  • Hosting environment (e.g. Azure Web App, App Service on Linux, Windows, Ubuntu, etc.):
    Kubernetes (Rancher) and also AKS on Alpine Linux

What are you trying to achieve?
Almost all of the telemetry comes through properly, but I still get the trace "AI: TelemetryChannel found a telemetry item without an InstrumentationKey. This is a required field and must be set in either your config file or at application startup." at regular intervals.
The key is set as the first step in ConfigureServices() in Startup.cs:

        public void ConfigureServices(IServiceCollection services)
        {
            var appInsightsKey = this.appConfig.GetConfig("App:ApplicationInsightsKey"); // Get my key
            services.AddAppInsights("LISTENER", appInsightsKey); // See extension method below
            services.AddControllers();
            ....
        }

        public static void AddAppInsights(this IServiceCollection services, string cloudRoleName,
            string instrumentationKey)
        {
            services.AddSingleton<ITelemetryInitializer>(new CloudRoleNameInitializer(cloudRoleName));
            services.AddApplicationInsightsTelemetry(o =>
            {
                o.InstrumentationKey = instrumentationKey;
                o.EnableAdaptiveSampling = false;
            });
            services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) =>
            {
                module.EnableSqlCommandTextInstrumentation = true;
            });
            services.AddApplicationInsightsTelemetryProcessor<HealthCheckExclusionFilter>();
            services.AddApplicationInsightsTelemetryProcessor<CustomPropertyFilter>();
            services.AddApplicationInsightsKubernetesEnricher();
        }
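
(CloudRoleNameInitializer itself is not shown here; a typical implementation of such an initializer, sketched below and possibly differing from the actual class, just stamps the cloud role name on every telemetry item.)

        using Microsoft.ApplicationInsights.Channel;
        using Microsoft.ApplicationInsights.Extensibility;

        // Illustrative sketch: sets the cloud role name on every telemetry item so
        // the service shows up under a stable name in the Application Map.
        public class CloudRoleNameInitializer : ITelemetryInitializer
        {
            private readonly string roleName;

            public CloudRoleNameInitializer(string roleName) => this.roleName = roleName;

            public void Initialize(ITelemetry telemetry)
            {
                telemetry.Context.Cloud.RoleName = this.roleName;
            }
        }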

What have you tried so far?
I tried adding a TelemetryProcessor to inspect each item and figure out where the message comes from, but did not get any leads.
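
For reference, a minimal sketch of that kind of diagnostic processor (names are illustrative), which can be registered the same way as the filters above via AddApplicationInsightsTelemetryProcessor<T>():

        using Microsoft.ApplicationInsights.Channel;
        using Microsoft.ApplicationInsights.Extensibility;

        // Illustrative diagnostic processor: flags items that reach the pipeline
        // without an instrumentation key so they can be inspected in a debugger.
        public class MissingKeyDiagnosticProcessor : ITelemetryProcessor
        {
            private readonly ITelemetryProcessor next;

            public MissingKeyDiagnosticProcessor(ITelemetryProcessor next) => this.next = next;

            public void Process(ITelemetry item)
            {
                if (string.IsNullOrEmpty(item.Context.InstrumentationKey))
                {
                    // Set a breakpoint here, or log item.GetType().Name elsewhere,
                    // to see which telemetry type is missing the key.
                    System.Diagnostics.Debug.WriteLine($"No iKey on {item.GetType().Name}");
                }

                this.next.Process(item);
            }
        }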

@cijothomas
Contributor

Can you share the full code showing how you enabled Application Insights? It's not clear which NuGet package you are using and which code snippet is in effect. Without seeing them, it's not possible to give a useful response.

@rnarayana
Author

My bad, I've updated the question with details. I thought adding the Application Insights telemetry at the very beginning would make the key available everywhere, and I'm unable to track down the intermittent error about the missing key.

@poveilleux

@rnarayana I updated to v2.15.0 last night and started to see this problem. It is really hard to find the source, but I did not have the problem with v2.14.0, so I'll just revert to get my telemetry back.

@rnarayana
Author

@poveilleux Are you also on kubernetes?

@poveilleux

@rnarayana I am, but I don't see how this would be related.

@cijothomas
Contributor

@rnarayana Are you facing this issue only when deploying to AKS? Are you able to see telemetry flowing correctly, without issues, when running locally?

@rnarayana
Author

@cijothomas I see this issue when deploying to Kubernetes (AKS and non-AKS). Note that all my other telemetry is flowing in correctly, but this one particular message keeps appearing exactly 15 minutes apart. I've checked multiple services, and in every service this entry shows up every 15 minutes. I have run locally in Debug mode for more than 15 minutes but did not see this issue.

[screenshot]

@cijothomas
Contributor

@rnarayana The 15-minute interval hints that this may have something to do with the heartbeat module. Can you check whether you have heartbeats flowing for the app in AKS/local? Heartbeats would be a customMetric with the name "heartbeat".

Also, if possible, can you isolate the issue to a particular version? Specifically, please confirm whether it repros with 2.15 only, or with 2.14 as well. (There were changes in the heartbeat modules in 2.15, so we want to narrow the scope down further to quickly reach the root cause.) Appreciate your patience!

cijothomas self-assigned this Oct 12, 2020
@rnarayana
Author

rnarayana commented Oct 13, 2020

I can confirm that either of the following gets rid of this issue:

  1. Use version 2.14 and set the key via AddApplicationInsightsTelemetry().
  2. Use version 2.15, but set the APPINSIGHTS_INSTRUMENTATIONKEY environment variable instead of setting the key via AddApplicationInsightsTelemetry() (see the sketch below).
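
A minimal sketch of option 2 (assuming the environment variable is set on the container): the AspNetCore SDK picks up APPINSIGHTS_INSTRUMENTATIONKEY on its own, so no key is passed in code:

services.AddApplicationInsightsTelemetry(o =>
{
    // No InstrumentationKey assigned here; the key comes from the
    // APPINSIGHTS_INSTRUMENTATIONKEY environment variable set on the container.
    o.EnableAdaptiveSampling = false;
});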

@vladislav-karamfilov

We encountered the described issue in a SF stateless ASP.NET Core service that is using the Microsoft.ApplicationInsights.AspNetCore v2.15.0 package. We have a lot of logs of this type.

We have noticed one more thing that might be a different issue but might be related to this one. Some of the Trace logs have the provider name <undefined>. These logs are duplicates of logs that have the correct provider name set, but this is still cluttering our logs.

@vladislav-karamfilov

vladislav-karamfilov commented Oct 15, 2020

Reverting back to v2.14.0, as @poveilleux suggested in one of the comments above, fixes the "AI: TelemetryChannel found a telemetry item without an InstrumentationKey. This is a required field and must be set in either your config file or at application startup." issue for us too. The issue with the <undefined> provider name still persists.

@cijothomas
Contributor

Thanks everyone for reporting the issue. This would be a regression from 2.14, most likely in the heartbeat area. Will investigate and provide a fix.

cijothomas added the P1 label Oct 15, 2020
@yehiasalam

I'm getting the same problem; I reverted to 2.14.0 and everything worked.

@rajkumar-rangaraj
Member

@rnarayana, I started investigating this issue but could not recreate it in my environment. Adding the code from the issue description did not reproduce the problem. I see a note stating the issue does not reproduce in a debug environment. Is it reproducible only in AKS? Could you please provide the steps to recreate the issue? That will help us investigate faster.

It would also help if you could provide the data below.

  • Do you see heartbeat information logged in the failure case? If so, could you please provide the heartbeat data? Heartbeat information gets logged into the customMetrics table with the name HeartbeatState.

@cijothomas
Contributor

We haven't fixed this yet, which means the fix won't be part of 2.16.

We will continue to work toward the root cause, and will do a 2.15.1 release for this if it is confirmed to be a regression introduced by 2.15. (Similarly, it'll be part of a 2.16.1 release as well.)

2.16 cannot be delayed, as it's a release done just to pick up the DiagnosticSource package version update to 5.0, which is releasing tomorrow.

cijothomas modified the milestones: 2.16, 2.17 Nov 9, 2020
@rnarayana
Author

I'll send full repro by this weekend.

@rnarayana
Author

rnarayana commented Nov 17, 2020

I've uploaded the repro here.
Steps:

  1. Replace everything that says REPLACE_THIS.
  2. Build the project.
  3. Build the Docker image.
  4. Deploy the Helm chart to AKS.
  5. Wait 30 minutes, then check Application Insights.

@mmulhearn

My team is also seeing this on Microsoft.ApplicationInsights.AspNetCore 2.15.0, and we are trying to determine the cause: we have one API service exhibiting the issue and three that are not.

It's a very strange issue. If the problem really is that the key isn't there, why is the iKey field filled in and correct? And how are we also seeing legitimate logs next to it (for instance, in the middle of the items with the issue, we see a log telling us that one of our dependency requests was unsuccessful)?

The message definitely sounds like a red herring for whatever the real issue is. At this point, we're comparing the slight configuration differences between the working and non-working APIs to see if we can determine the cause.

@mmulhearn

As we started to peel things away, we found that removing this configuration from our API app in Azure fixed the issue, and re-adding it re-introduced the issue:

{
  "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
  "value": "~2",
  "slotSetting": true
}

@mmulhearn

It would appear this configuration change causes the issue on a resource that is not currently exhibiting it:

  • Go to Application Insights for your app in Azure
  • Go to the Instrument your application section
  • Go to the .NET Core tab
  • Turn on Interop with Application Insights SDK (preview)

This caused the issue in our QA environment resource that was not previously exhibiting it.

@mmulhearn

We have confirmed that Interop with Application Insights SDK (preview) is the issue, not the ApplicationInsightsAgent_EXTENSION_VERSION configuration.

@rajkumar-rangaraj
Member

@rnarayana Thanks for the repro, it helped us investigate the issue.

The workaround for this issue is to set EnableActiveTelemetryConfigurationSetup to true in ApplicationInsightsServiceOptions. For example:

services.AddApplicationInsightsTelemetry(o =>
{
    o.InstrumentationKey = instrumentationKey;
    o.EnableAdaptiveSampling = false;
    o.EnableActiveTelemetryConfigurationSetup = true;
});

Please note that a duplicate heartbeat is still sent every 15 minutes; the change proposed above ensures that the duplicate heartbeat has an InstrumentationKey and won't generate the internal trace message.

Root cause: the Microsoft.ApplicationInsights.NLogTarget package creates an additional TelemetryConfiguration with the HeartbeatProvider module registered. This causes two heartbeats to flow every 15 minutes instead of one. We will work on a fix to prevent the duplicate heartbeat. Please use the workaround to avoid the internal trace message being logged in your component.

@eriksteinebach

eriksteinebach commented Aug 20, 2021

We are seeing the same message, but in our case it is in a .NET Core WebJob project.

We are using the following code to set up Application Insights:

var environment = configuration.GetEnvironment();
var instrumentationKey = configuration.GetInstrumentationKey();
builder.ConfigureLogging((context, loggingBuilder) =>
{
    loggingBuilder.AddApplicationInsightsWebJobs(o =>
    {
        o.InstrumentationKey = instrumentationKey;
        o.EnableLiveMetrics = true;
        o.SamplingExcludedTypes = "Exception";
    });
});

Because it is not an ASP.NET project, I don't have EnableActiveTelemetryConfigurationSetup available as a workaround. Any idea how to solve this in our case?

@AroglDarthu

@cijothomas Any thoughts on a possible due date for the bugfix?

Will you make a fix available for 2.17.x? I just noticed that the issue has not been resolved in 2.17.0.

We are experiencing the same issue. It is really annoying, mainly because developers do not notice anything is off when they only updated the package to a newer minor version. Also, if you just add Application Insights to a new application now, by default you will be referencing a faulty version :-(

You might want to re-add the P1 label (assuming that is the highest priority).

@jhubsharp

jhubsharp commented Feb 8, 2022

We're seeing this same behavior in an Azure Function where we've added Application Insights to the logging path.

Here's where we've wired up Application Insights in our logger:
builder.Services.AddLogging(logBuilder => { logBuilder.AddApplicationInsights(); });

Here's what the configuration in host.json looks like:

{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false
      },
      "enableDependencyTracking": true,
      "dependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }
    }
  }
}

We've got older function apps on .NET Core 3.1 that aren't having this problem. I've only seen it on our .NET 6.0 functions. We have both an APPINSIGHTS_INSTRUMENTATIONKEY and an APPLICATIONINSIGHTS_CONNECTION_STRING in the function configuration.

@TimothyMothra
Member

TimothyMothra commented Feb 11, 2022

Has anyone experienced this issue using just the SDK?
(i.e. not using a logging adapter such as NLog or Log4Net)

@cijothomas
Contributor

@jhubsharp I don't think calling AddApplicationInsights() is supported in Azure Functions, as Functions already wires up Application Insights, and adding it again would cause incorrect configuration.

@Marusyk

Marusyk commented Mar 1, 2022

I have the same issue with .NET 6

<PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.17.0" />
<PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.17.0" />

then

string instrumentationKey = context.Configuration["Logging:ApplicationInsights:InstrumentationKey"];
if (isWorker)
{
	services.AddApplicationInsightsTelemetryWorkerService(instrumentationKey);
}
else
{
	services.AddApplicationInsightsTelemetry(instrumentationKey);
}

[screenshot]

@Danieladu

@cijothomas Any updates on this? Thanks!

@cijothomas
Contributor

The only update I have is that, based on #2070 (comment), this issue occurs when using logging adapters (other than ILogger) in .NET Core. It'd be helpful if anyone has a repro outside of this.
(And in the case of logging adapters, the workaround is as posted in the comment above. There are no plans to make the logging adapters integrate better with DI/ASP.NET Core.)

@Danieladu

Thanks for the quick reply!

@alsami

alsami commented May 4, 2022

We have the same problem with one service that runs on-premises and uses Serilog. The funny thing is that it does not happen for services that run in Azure. Weird issue, tbh.

@Danieladu

@alsami Check your app settings in Azure. Maybe the App Service set a default Application Insights key for your service.

@alsami

alsami commented May 6, 2022

My service is running on-premises, logging to Application Insights. There is no App Service.

@flower7434

Setting the APPINSIGHTS_INSTRUMENTATIONKEY config value solved it for me. The connection string was already set and working. (.NET 6)

@Saibamen

@FredrikDahlberg: Adding InstrumentationKey to appsettings.json, next to ConnectionString, doesn't fix this problem :(

I'm also using a .NET 6 app, Microsoft.ApplicationInsights.AspNetCore version 2.21.0.

@Saibamen

Workaround for this issue is to set EnableActiveTelemetryConfigurationSetup to true in ApplicationInsightsServiceOptions.

@rajkumar-rangaraj: Doesn't work for me

@cijothomas
Contributor

@Saibamen Could you share a repro app? It's very hard to find what's wrong without seeing a repro.

@Saibamen

Saibamen commented Dec 1, 2022

@cijothomas: Repro app: https://github.com/Saibamen/AppInsightsTraceWarning

@cijothomas
Contributor

@cijothomas: Repro app: https://github.com/Saibamen/AppInsightsTraceWarning

Thanks. It looks like you are using Serilog and the Serilog sink for Application Insights. Could you remove it and see if the issue still repros?

@Saibamen

Saibamen commented Dec 2, 2022

@cijothomas: I still have this trace warning message :(

The code without Serilog is in the remove_serilog branch. You can see the changes in PR Saibamen/AppInsightsTraceWarning#1.

@cijothomas
Contributor

@Saibamen From a quick attempt, this did not repro for me. Are you able to run the app locally (without Docker or App Service), and does it still repro?

@Saibamen

Are you able to run the app locally (without docker or app service), and it still repros?

No. Everything is fine locally (Visual Studio on Windows 10)

@cijothomas
Contributor

@Saibamen I do not see a repro with the code shared. If things work locally but not when you deploy to Docker, can you check whether appsettings.json is copied correctly into the Docker image?

@pushkarajawad

Any updates on this issue? We are still facing it in version 2.21.0.

@mikeblakeuk

mikeblakeuk commented Feb 13, 2023

Any news on this?
Also, why add "AddResponseCompression" in the sample? Is that to force the app to emulate being inside an AKS cluster that is not using HTTPS?

@kristina-devochko

kristina-devochko commented Mar 18, 2023

@cijothomas Hi, I can confirm that the same issue is happening for .NET 6 worker services and .NET 4.7.2 console apps with the NLog provider (Microsoft.ApplicationInsights.NLogTarget 2.21.0) and Application Insights SDK 2.21.0.

Another weird thing is that I can't reproduce it when testing locally. There it works perfectly and data is sent as expected (both Debug and Release build output), but once it's deployed to the servers, nothing is logged except for the missing instrumentation key message.

Can you please hint at what workaround can be used for non-web apps? #2070 solved it for ASP.NET Core 6 web apps, but the non-web apps only log this "TelemetryChannel found a telemetry item without an InstrumentationKey" message. These are non-Azure services: regular Windows services and console apps on-premises that send data to an Application Insights instance in Azure.

I can't find any useful information on where to start fixing this, as we have followed the official guidance on setting up Application Insights with worker services: https://learn.microsoft.com/en-us/azure/azure-monitor/app/worker-service
From this thread I see that I'm not the only one experiencing this issue with non-web apps and the NLog provider. I would really appreciate help here.

@eriksteinebach maybe you were able to find a solution, ref. your comment #2070 (comment)? It would be so helpful if you could share how you approached this!

@Saibamen

@Saibamen I do not see a repro with the code shared. If things work locally, but not when you deploy to Docker, can you check if the appsettings.json is copied correctly in docker?

Yes, it is copied correctly.

@MaxPrisich

MaxPrisich commented Oct 17, 2023

Hi all,
I also faced this issue in a .NET 7 Worker Service project. As already mentioned by @kristina-devochko and @eriksteinebach, the workaround for web apps doesn't apply to worker services. So I came up with the following workaround to manually set the connection string on the NLog target's telemetry configuration:

private static void AddLoggingAndTelemetry(this IServiceCollection services, IConfiguration configuration)
{
    services.AddApplicationInsightsTelemetryWorkerService();
    services.AddLogging(b =>
    {
        b.ClearProviders();
        b.SetMinimumLevel(LogLevel.Trace);
        b.AddNLog(configuration);
    });

    // The NLog target creates its own TelemetryConfiguration (the root cause noted
    // earlier in this thread), so reach into it via reflection and set the
    // connection string there as well.
    foreach (var target in NLog.LogManager.Configuration.AllTargets.OfType<ApplicationInsightsTarget>())
    {
        var config = target.GetPrivateField("telemetryClient").GetPrivateField("configuration") as TelemetryConfiguration;
        config.ConnectionString = configuration["ApplicationInsights:ConnectionString"];
    }
}

private static object GetPrivateField(this object o, string field)
    => o.GetType().GetField(field, BindingFlags.NonPublic | BindingFlags.Instance).GetValue(o);

This solution worked for me.

@jannikbeibl

As of now, this issue still persists for worker services. I found out that the NLog target uses the (deprecated) default constructor of the TelemetryClient class, which passes the singleton TelemetryConfiguration.Active to the base class constructor. (see here and here)

Although the static TelemetryConfiguration.Active property is also marked as obsolete, you can set the ConnectionString on the Active property, and it is then picked up by the NLog target. This may be a bit better than using reflection to set the property.
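
A minimal sketch of that approach in worker-service startup code (class and configuration key names are illustrative; the key path matches the earlier reflection-based workaround), with the pragma only suppressing the obsolete-API warning:

using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using NLog.Extensions.Logging;

public static class TelemetryRegistration
{
    public static void AddWorkerTelemetry(this IServiceCollection services, IConfiguration configuration)
    {
#pragma warning disable CS0618 // TelemetryConfiguration.Active is obsolete
        // The NLog target reads TelemetryConfiguration.Active, so giving it a
        // connection string here keeps its telemetry from missing the iKey.
        TelemetryConfiguration.Active.ConnectionString =
            configuration["ApplicationInsights:ConnectionString"];
#pragma warning restore CS0618

        services.AddApplicationInsightsTelemetryWorkerService();
        services.AddLogging(b => b.AddNLog(configuration));
    }
}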
