Intermittent ApplicationInsights:Sender [ [Error: UNABLE_TO_VERIFY_LEAF_SIGNATURE] ] #180

Closed
annanhan opened this issue Jan 27, 2017 · 53 comments

@annanhan

While using applicationinsights@0.17.2, I noticed these errors in our logs. The node app is hosted on Heroku using an SSL endpoint addon. After doing some research, it looks like node isn't accepting the server's cert. The messages were getting to Application Insights as far as I could tell.

Additionally, it caused a huge memory leak. After I disabled the package, the memory profile was back to normal.

Here are the other packages we're using:
"engines": {
"node": "^0.10.28",
"npm": "^1.4.9"
},
"dependencies": {
"amqp.channel": "^0.0.8",
"applicationinsights": "^0.17.2",
"args-list": "^0.3.3",
"bluebird": "^2.1.2",
"cookie-parser": "^1.3.4",
"cool-ascii-faces": "^1.3.3",
"cors": "^2.3.1",
"ddi": "^1.1.0",
"express": "^4.8.0",
"express-winston": "^1.0.0",
"formdefutils": "^0.0.2",
"gift": "^0.6.0",
"github-webhook-handler": "^0.5.0",
"glob": "^4.3.2",
"le_node": "^1.0.12",
"lodash": "^3.10.1",
"mailgun-js": "^0.7.7",
"mapquest": "^0.2.0",
"moment": "^2.10.3",
"multiparty": "^4.1.2",
"node-xlsx": "^0.5.1",
"phone-formatter": "0.0.2",
"request": "^2.34.0",
"rimraf": "^2.2.8",
"romis": "^2.0.0",
"sprintf": "^0.1.4",
"urlencoded-request-parser": "^1.0.1",
"us": "^1.0.3",
"uuid": "^2.0.1",
"winston": "^1.0.0",
"xml2js": "~0.4.4",
"yamljs": "^0.2.1"
}

@SergeyKanzhelev

@annanhan thanks for reporting. So Application Insights fails to communicate with the server. Does it repro on all computers or just on your dev box? Do you think this is related to this issue: #177?

Can you please try to open https://dc.services.visualstudio.com/api/ping from the same computer and see whether cert is trusted.


Also please check explicitly whether this page can be opened: https://baltimore-cybertrust-root.digicert.com/

If your browser loads this page without warning, it trusts the Baltimore CyberTrust Root.
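
Since the browser and Node use different trust stores, a quick check from Node itself may be more telling. A minimal sketch using only the built-in https module:

// Minimal sketch (not SDK code): hit the ping endpoint from Node itself,
// since Node's bundled CA store differs from the browser's.
var https = require('https');
https.get('https://dc.services.visualstudio.com/api/ping', function (res) {
    console.log('status:', res.statusCode); // reaching here means the TLS handshake verified
    res.resume();
}).on('error', function (err) {
    console.error('TLS/connection error:', err.code, err.message);
});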

W.r.t. the memory leak: if the SDK fails to communicate with the backend, it should start dropping items from the queue. @KamilSzostak to confirm this behavior.

@annanhan
Author

I can't reproduce on my box. We are running v0.17.2 on another app hosted in Heroku, and that one seems to be running fine, but I think the difference is that it isn't using an SSL endpoint.

I am able to hit both of those pages from my dev box.

@SergeyKanzhelev

We only have an SSL endpoint - http is not supported.

In which environment does it repro? Does it repro constantly or intermittently?

@annanhan
Author

It seemed intermittent.

Sorry, I should have been more clear.
App A, which is running v0.17.2 and is not showing memory problems or cert issues, is using HTTPS with no SSL certs.
App B, which was running v0.17.2 and had memory problems and cert issues, is using HTTPS with an SSL cert.

Although, I just noticed that App A is using Node v6.9.1 and App B is using Node v0.10.28. I'll try updating the Node version and see if that helps.

@annanhan
Author

So it looks like the memory issue was resolved by upgrading Node to v6.9.1, but I'm still getting the cert errors.

@SergeyKanzhelev

It's good to hear that the memory issues got resolved.

Are those cert issues evenly distributed over time, or are they somehow grouped in time?

I see that the cert is OK on the endpoint: https://sslanalyzer.comodoca.com/?url=https%3A%2F%2Fdc.services.visualstudio.com%2Fapi%2Fping

I also found this thread with the recommendation to do:

npm install ssl-root-cas

var sslRootCAs = require('ssl-root-cas/latest')
sslRootCAs.inject()
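
If inject() alone isn't picked up, a variant of the same idea is to hang the downloaded CAs off the global HTTPS agent explicitly (a sketch following that package's documented usage):

// Sketch: assign the ssl-root-cas bundle to the global agent explicitly,
// in case inject() alone isn't applied to the SDK's requests.
var https = require('https');
var rootCas = require('ssl-root-cas/latest').create();
https.globalAgent.options.ca = rootCas;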

@cmdkoh

cmdkoh commented Feb 13, 2017

Also getting this error in my environment hosted in Azure. Following is the error log:
Environment Info:
npm applicationinsights: version 0.18.0
os: Ubuntu 16.04.1 LTS
node: v6.5.0

ApplicationInsights:Sender [ { Error: unable to verify the first certificate
    at Error (native)
    at TLSSocket.<anonymous> (_tls_wrap.js:1060:38)
    at emitNone (events.js:86:13)
    at TLSSocket.emit (events.js:185:7)
    at TLSSocket._finishInit (_tls_wrap.js:584:8)
    at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:416:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]

@SergeyKanzhelev

@cmdkoh would it be possible to try my recommendation above? I wonder if it's a problem of a missing SSL intermediate certificate on the Ubuntu image in Azure.

@cmdkoh

cmdkoh commented Feb 13, 2017

@SergeyKanzhelev Sure, will report back the findings...

@cmdkoh

cmdkoh commented Feb 13, 2017

@SergeyKanzhelev I made the changes and it did the following:

...
Skipped 19 untrusted certificates.
Processed 173 certificates.
Wrote 173 certificates in '../myapp/node_modules/ssl-root-cas/pems/'.
Wrote '../myapp/node_modules/ssl-root-cas/ssl-root-cas-latest.js'.
##########################################################################################
#                                                                                        #
#  Downloaded the latest Root Certificate Authorities. Restart your server to use them.  #
#                                                                                        #
##########################################################################################
...

Unfortunately, the "unable to verify the first certificate" error is still showing up after I restarted the app...

@SergeyKanzhelev

Does curl work fine? Try curl -I https://dc.services.visualstudio.com/api/ping. Does it report any SSL errors?

@cmdkoh

cmdkoh commented Feb 13, 2017

Yes, the response looks fine (200 OK).

@SergeyKanzhelev

So can no events be sent at all, or do only some connections end up with this error?

@SergeyKanzhelev

@cmdkoh I'm actually out of ideas and got to the last page of Bing search results =). So if you have any, let us know. @KamilSzostak @OsvaldoRosado could you try an Ubuntu image on Azure for a repro?

@cmdkoh

cmdkoh commented Feb 14, 2017

@SergeyKanzhelev It seems I am still able to post telemetry to Application Insights; perhaps other connections (other auto-collections? but I did turn them off...) end up with this error. What I need to do is add logging prior to posting telemetry, then cross-check any SSL errors against the telemetry being posted in Azure.

@SergeyKanzhelev

@cmdkoh I didn't fully understand your solution. We do not have any other endpoints our node.js SDK talks to. Perhaps it's something node is doing by itself, or some other module? (like this issue where an extra call was somehow injected into node.js running as an Azure Web Site: #144)

Can you please explain, or even copy/paste a code snippet of, what you are logging prior to posting telemetry?
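
For reference, a rough sketch of such logging (monkey-patching https.request; purely diagnostic, not something the SDK does):

// Rough sketch (not SDK code): wrap https.request so every outgoing request
// is logged and SSL failures can be correlated with their destination.
var https = require('https');
var originalRequest = https.request;
https.request = function (options) {
    // options is an object in Node 6.x-era code; log the destination before delegating
    var host = options.host || options.hostname || String(options);
    console.log(new Date().toISOString() + ' - outgoing HTTPS request to ' + host + (options.path || ''));
    return originalRequest.apply(https, arguments);
};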

@jeffwilcox
Contributor

Still seeing this. Production App Service deployment having intermittent errors...

@kellylawson

kellylawson commented Feb 22, 2017

@SergeyKanzhelev I've also been running into this issue in my local environment; the telemetry seems to be showing up in Azure properly, but I see intermittent warnings about SSL cert failures. I tried the recommendation from the SO question you linked above, but that didn't help. However, digging into ssl-root-cas a little, I noticed this section, which mentions that this error generally means the server is misconfigured.

That section suggests adding the intermediate certs on the client to compensate for the server. I tried that by following the cert link you included above to this page, using the DigiCert Baltimore CA-1 G2 and DigiCert Baltimore CA-2 G2 intermediate certs, and unfortunately it still errors intermittently.

So I rolled all that back and added some logging to the location in applicationinsights that is throwing the error. I log the destination URL before the request is made, then log the options and the error message when the cert error is thrown. I get logs that look like the following:

2017-02-22T05:37:06.394Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:37:26.398Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:37:46.401Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:38:06.402Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:38:26.408Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:38:46.415Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:39:06.428Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:39:26.419Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:39:26.731Z - Request options were {"host":"dc.services.visualstudio.com","port":null,"path":"/v2/track","method":"POST","headers":{"Content-Type":"application/x-json-stream","Content-Encoding":"gzip","Content-Length":1771},"disableAppInsightsAutoCollection":true}
2017-02-22T05:39:26.732Z - Error: unable to verify the first certificate
ApplicationInsights:Sender [ { Error: unable to verify the first certificate
      at Error (native)
      at TLSSocket.<anonymous> (_tls_wrap.js:1079:38)
      at emitNone (events.js:86:13)
      at TLSSocket.emit (events.js:185:7)
      at TLSSocket._finishInit (_tls_wrap.js:603:8)
      at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:433:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]
2017-02-22T05:39:46.422Z - Sending insights request to https://dc.services.visualstudio.com/v2/track
2017-02-22T05:40:06.424Z - Sending insights request to https://dc.services.visualstudio.com/v2/track

I'm wondering if these requests are being load balanced to different servers and some of them are not configured correctly? It seems odd that these wouldn't fail consistently. I admit I'm not very knowledgeable about SSL certs, but maybe someone could look in the logs for dc.services.visualstudio.com/v2/track at the timestamps I have pasted above and see if there is any consistency in errors coming from specific VMs? I also ran into errors at 2017-02-22T05:40:26.685Z, 2017-02-22T05:45:46.744Z, and 2017-02-22T05:51:06.798Z.

I'm actually not even sure what these requests are; my local server isn't handling any traffic while I'm logging, but it still seems to ping dc.services.visualstudio.com/v2/track regularly, three times per minute. If there's anything I can log on my side to help get to a solution, let me know; I'd like to get to the bottom of this.
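
(My guess is that the cadence is just the sender's periodic batch flush. If so, it should be tunable through the client config; the property names below assume the 0.x SDK:)

var appInsights = require('applicationinsights');
var client = appInsights.getClient('ikey'); // placeholder instrumentation key
client.config.maxBatchIntervalMs = 60000;   // flush at most once a minute...
client.config.maxBatchSize = 250;           // ...or as soon as 250 items are queued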

@SergeyKanzhelev

OK, I'm able to repro the issue locally with the small loop below (you need to wait a while, though). I enabled tracing, but it didn't give me much. I doubt the assumption that the server is misconfigured is correct; we do not see this issue with other SDKs.

I'll keep digging. We'll also schedule work to check the server errors (there are a lot =)). @OsvaldoRosado @KamilSzostak do you have other ideas on how to troubleshoot the issue?

var http = require('http');
var url = require('url');
var sleep = require('system-sleep');    

var appInsights = require("applicationinsights");

//everything got fixed with this setting:
//process.env['NODE_TLS_REJECT_UNAUTHORIZED'] = '0';

appInsights.enableVerboseLogging();

var client = appInsights.getClient("key");
client.config.maxBatchSize  = 1;

while (true) {
    console.log("test");
    client.trackEvent("test");

    sleep(300);
}

I enabled tracing with $env:NODE_DEBUG="tls,fs,net" and got two sets of traces that unfortunately don't give much detail on which certificate is considered bad:

Successful

ApplicationInsights:Sender [ { host: 'dc.services.visualstudio.com',
    port: null,
    path: '/v2/track',
    method: 'POST',
    headers:
     { 'Content-Type': 'application/x-json-stream',
       'Content-Encoding': 'gzip',
       'Content-Length': 341 } } ]
NET 23904: pipe false undefined
NET 23904: connect: find host dc.services.visualstudio.com
NET 23904: connect: dns options { family: undefined, hints: 3072 }
NET 23904: _read
NET 23904: _read wait for connection
NET 23904: afterConnect
TLS 23904: start
NET 23904: _read
NET 23904: Socket._read readStart
TLS 23904: secure established
NET 23904: afterWrite 0
NET 23904: afterWrite call cb
NET 23904: onread 619
NET 23904: got data
NET 23904: _read
NET 23904: onSocketFinish
NET 23904: oSF: not ended, call shutdown()
NET 23904: destroy undefined
NET 23904: destroy
NET 23904: close
NET 23904: close handle
ApplicationInsights:Sender [ '{"itemsReceived":1,"itemsAccepted":0,"errors":[{"index":0,"statusCode":400,"message":"Invalid instrumentation key"}]}' ]
NET 23904: afterShutdown destroyed=true ReadableState {
  objectMode: false,
  highWaterMark: 16384,
  buffer: [],
  length: 0,
  pipes: null,
  pipesCount: 0,
  flowing: true,
  ended: false,
  endEmitted: false,
  reading: true,
  sync: false,
  needReadable: true,
  emittedReadable: false,
  readableListening: false,
  resumeScheduled: false,
  defaultEncoding: 'utf8',
  ranOut: false,
  awaitDrain: 0,
  readingMore: false,
  decoder: null,
  encoding: null }
NET 23904: emit close

Unsuccessful

ApplicationInsights:Sender [ { host: 'dc.services.visualstudio.com',
    port: null,
    path: '/v2/track',
    method: 'POST',
    headers:
     { 'Content-Type': 'application/x-json-stream',
       'Content-Encoding': 'gzip',
       'Content-Length': 342 } } ]
NET 15192: pipe false undefined
NET 15192: connect: find host dc.services.visualstudio.com
NET 15192: connect: dns options { family: undefined, hints: 3072 }
NET 15192: _read
NET 15192: _read wait for connection
NET 15192: afterConnect
TLS 15192: start
NET 15192: _read
NET 15192: Socket._read readStart
TLS 15192: secure established
NET 15192: destroy { [Error: unable to verify the first certificate] code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' }
NET 15192: destroy
NET 15192: close
NET 15192: close handle
NET 15192: afterWrite -4047
NET 15192: afterWrite destroyed
NET 15192: emit close
ApplicationInsights:Sender [ { [Error: socket hang up] code: 'ECONNRESET' } ]
ApplicationInsights:Sender [ { [Error: unable to verify the first certificate] code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]
NET 15192: destroy undefined
NET 15192: destroy
NET 15192: already destroyed, fire error callbacks
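
To find out which certificate is actually considered bad, a follow-up sketch that completes the handshake without verification and dumps the chain the server presents (rejectUnauthorized: false is for diagnostics only, never production):

// Sketch: dump the certificate chain the server actually sends, since the
// NODE_DEBUG traces don't say which certificate failed verification.
var tls = require('tls');
var socket = tls.connect({
    host: 'dc.services.visualstudio.com',
    port: 443,
    servername: 'dc.services.visualstudio.com',
    rejectUnauthorized: false // complete the handshake so the chain can be inspected
}, function () {
    var cert = socket.getPeerCertificate(true); // true => include the issuer chain
    while (cert) {
        console.log((cert.subject && cert.subject.CN) + ' <- ' + (cert.issuer && cert.issuer.CN));
        if (!cert.issuerCertificate || cert.issuerCertificate === cert) break; // root is self-referential
        cert = cert.issuerCertificate;
    }
    socket.end();
});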

@SergeyKanzhelev

Update: we found some SSL-related messages on servers. Investigating whether they are relevant to the issue.

@kellylawson

kellylawson commented Feb 23, 2017 via email

@SergeyKanzhelev

@KamilSzostak can you please make sure the SDK retries on this error?

We are still investigating the server-side issue.

@SergeyKanzhelev

Update: current thinking is that the issue may be related to this:

the problem stems from the rather insane "feature" that IIS decides which are the intermediate certificates for your certificate chain "automatically". And it gets it wrong. In my case I have:

Root CA -> Intermediate CA1 -> Intermediate CA2 -> server certificate

For whatever reasons my Windows Azure server had tons of pre-installed CA certificates in its trusted store, including some, let's call it RootCA2. Now the issue was that Intermediate CA2 was signed also by RootCA2. So from the point of view of the Azure server, the chain was:

RootCA2 -> Intermediate CA2 -> server certificate

... and as you might guess, IIS decided not to send Intermediate CA1 at all in the certificate chain. Now, most real clients (modern browsers, devices, etc.) didn't have the RootCA2 pre-installed (but do have the Root CA) and as a result got a broken chain.

We are continuing to investigate the issue.

@dgilling

We get this on Azure Web Apps also for a Node Server.

@superlime

Hack/workaround based on Sergey's investigation that seems to be working for me:

  1. Use openssl to download the missing certificate (I followed some directions here). This is the one I extracted, which should be valid until 12/2017:
    msit_cert.txt

  2. Install the ssl-root-cas package with npm i ssl-root-cas --save

  3. Load ssl-root-cas, and append the MSIT cert:

var rootcas = require('ssl-root-cas').create();
// Temp fix for appInsights TLS error.  See https://github.com/Microsoft/ApplicationInsights-node.js/issues/180
rootcas.addFile(__dirname + "/msit_cert.txt");
require('https').globalAgent.options.ca = rootcas;

Like I said, it seems to have fixed things by trusting the Intermediate CA2 from Sergey's investigation. I haven't seen the transient errors since adding this, but your mileage may vary. :)

@gclifford

I am getting the same error on a Linux VM (Ubuntu) on azure.

@annanhan
Author

@SergeyKanzhelev any update on this issue? It seems to come and go. Do you need more information to help with the investigation?

@SergeyKanzhelev

@OsvaldoRosado said it doesn't repro any longer. Osvaldo?

@OsvaldoRosado
Member

OsvaldoRosado commented Mar 28, 2017

It wasn't occurring for some time, but it seems to have returned. Root cause for the missing intermediate certificate is still unclear.

@AlexBulankou AlexBulankou added this to the 0.22.0 milestone Aug 18, 2017
@AlexBulankou AlexBulankou modified the milestones: Future, 0.22.0 Aug 30, 2017
@OsvaldoRosado
Member

Closing this as it doesn't appear to occur any longer from my own testing and no new reports have been received in many months. If anyone is still seeing this please re-open!

@cmdkoh

cmdkoh commented May 2, 2018

This is still happening; see the logs below:

ApplicationInsights:Sender [ { Error: unable to verify the first certificate
    at TLSSocket.<anonymous> (_tls_wrap.js:1092:38)
    at emitNone (events.js:86:13)
    at TLSSocket.emit (events.js:185:7)
    at TLSSocket._finishInit (_tls_wrap.js:610:8)
    at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:440:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]

@MichaelTsengLZ

@OsvaldoRosado
It happened to one of my services.
We cannot see any App Insights telemetry in the Azure portal. The error is below.

ApplicationInsights:Sender [ { Error: unable to verify the first certificate
      at Error (native)
      at TLSSocket.<anonymous> (_tls_wrap.js:1092:38)
      at emitNone (events.js:86:13)
      at TLSSocket.emit (events.js:185:7)
      at TLSSocket._finishInit (_tls_wrap.js:609:8)
      at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:439:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]

@OsvaldoRosado
Member

Thanks for the reports! I've raised this issue internally for investigation. This error indicates an issue connecting with our backend (in particular the SDK not being able to trust the backend's SSL certificate).

For now, the only action to take in your applications is to make sure the SDK's disk-backed retries are enabled (this is the default in v1.x). This will allow the SDK to retry sending this telemetry at a time when the backend is able to receive it.
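
A minimal sketch of enabling this explicitly (it is already the default in v1.x; 'ikey' is a placeholder instrumentation key):

const appInsights = require('applicationinsights');
appInsights.setup('ikey')
    .setUseDiskRetryCaching(true) // re-sends failed batches from disk once the backend recovers
    .start();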

@MarkHerhold

@OsvaldoRosado any update on this issue?

@OsvaldoRosado
Member

OsvaldoRosado commented May 18, 2018

My understanding is that an intermittent backend issue has been identified and a long-term fix is being planned. That being said, I'd love to collect some more data on this.

If those who see this could mention if they have any data loss (when disk-backed retries are enabled), how frequently they see the error, and if there is any regularity as to when it occurs, that would be great!

I'm also considering reducing the severity of this error message when retries are enabled, as it is purely informational in this case. Ideally, I think this should only start complaining at error severity if telemetry couldn't be sent some N consecutive times, indicating more than a transient and automatically recovered error. This change would remove this log message (unless verbose logging is enabled) for any cases where the telemetry did eventually make it to the backend. I'm open to thoughts on this as well!

Re-opening the issue.

@OsvaldoRosado OsvaldoRosado reopened this May 18, 2018
@jkrinsky

Hello, we see this error on most of our Azure App Services, and we had to set setUseDiskRetryCaching to false because that was also throwing storage errors. :/

@OsvaldoRosado
Member

OsvaldoRosado commented May 23, 2018

@jkrinsky Could you please open a new issue for the storage errors? Definitely sounds like a bug!

Assuming they're issues with the ACL folder protection for Windows that was added in 1.0.2, you can forcefully disable it with:

client.channel._sender.OS_PROVIDES_FILE_PROTECTION = true;
client.channel._sender.USE_ICACLS = false;

But be aware that you're taking on a bit more risk with this disabled (which is why it's hard to disable), as the SDK can't ensure other user accounts on the box are restricted from reading the telemetry stored to disk for retry.

@yoadsn

yoadsn commented May 23, 2018

@OsvaldoRosado We report events from Azure Functions; no retries there, I assume, since it's a rather short-lived process?
(I see the errors, but the events do not end up being reported.)
This is really a reliability issue for us. (We do BPM on top of App Insights events.)

@OsvaldoRosado
Member

@yoadsn I don't have any great solutions right now if you can't use retries. You can of course enable retries but this might cause your functions to run longer than you like (and depending on the details of the bug found by @jkrinsky you might also need the config overrides I posted above to get the retries to work).

You can use the Flush API to manually send telemetry and get programmatically informed when telemetry sending fails. E.g.:

client.flush({ callback: (serverResponse) => {
    // indexOf returns -1 when the substring is absent (which is truthy!), so compare explicitly
    if (serverResponse.indexOf('UNABLE_TO_VERIFY_LEAF_SIGNATURE') !== -1) {
        // Failed! Try to resend telemetry?
    }
}});

But admittedly this seems rather obtuse. I do know the backend team is working to resolve the root cause, which would remove any need for these workarounds.

@OsvaldoRosado
Member

Version 1.0.3 of this SDK has now been released. It includes some changes that might help with this problem.

  1. A fix for the ACL errors on App Services / Azure Functions that prevented retries from working.
  2. Logging for these connection errors now only appears by default if retries are disabled or if connecting to the backend fails 5 consecutive times. The intention is that you're only warned when there's an actual risk of telemetry loss rather than just transient issues. Those who have enabled verbose logging will continue to see errors recorded on each connection failure. In every case, these connection errors now include more helpful information, in addition to the raw networking error from Node, to help explain what's happening.

@OsvaldoRosado
Member

Closing this for now due to a lack of reports after 1.0.3's adjustment to how this situation is handled. Please feel free to reopen if things still don't seem right!

@ghost

ghost commented Feb 1, 2019

I am noticing the following issue in v1.0.7. Any suggestions on how to overcome this?
I am running the application inside a Docker container on an Azure VM.

ApplicationInsights:Sender [ 'Ingestion endpoint could not be reached. This batch of telemetry items has been lost. Use Disk Retry Caching to enable resending of failed telemetry. Error:',
{ Error: unable to verify the first certificate
at TLSSocket.<anonymous> (_tls_wrap.js:1103:38)
at ZoneDelegate.invokeTask (/node_modules/zone.js/dist/zone-node.js:275:35)
at Zone.runTask (/node_modules/zone.js/dist/zone-node.js:151:47)
at TLSSocket.ZoneTask.invoke (/node_modules/zone.js/dist/zone-node.js:345:33)
at emitNone (events.js:106:13)
at TLSSocket.emit (events.js:208:7)
at TLSSocket._finishInit (_tls_wrap.js:637:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:467:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]

@OsvaldoRosado
Member

OsvaldoRosado commented Feb 1, 2019

It's expected for the backend to have transient failures. As long as you don't see this all of the time, things are working properly.

As the error message states, it's highly encouraged for you to enable disk retry caching to ensure these transient backend failures do not result in lost telemetry. You can do that by using setUseDiskRetryCaching(true) (or simply not changing the default, which is true).

EDIT: As an additional note, when disk retry caching is on, transient errors like these will not be presented to you unless the SDK detects a prolonged problem reaching the backend.

@ghost

ghost commented Feb 1, 2019

Thanks @OsvaldoRosado, but this issue seems to be occurring all the time, and after some amount of time it takes down the entire application. I'm not sure why this occurs; it is easily reproducible.

@OsvaldoRosado
Member

@sharath-srinivasan If you remove the SDK from your application, does it still crash? Network failures should never be fatal. Does the crash come with a stack trace?

As for the persistent SSL errors:

  • Does your docker container (or VM) run behind any kind of proxy?
  • Can you reproduce the connection failures locally? (without docker if possible)
  • Do the failures still happen with a minimal code sample (like below), or do they only occur when the SDK is in your app?
const appInsights = require("applicationinsights");
appInsights.setup('ikey').start();
appInsights.defaultClient.trackEvent({name: "test event"});
appInsights.defaultClient.flush();

Can you also provide what version of Node you're using?

@jeevacodepro

jeevacodepro commented Mar 22, 2019

I am seeing the same error in the AppCenter build for Xamarin projects.

##[section]Starting: Analyze build log

Task : Command Line
Description : Run a command line with arguments
Version : 1.1.3
Author : Microsoft Corporation
Help : More Information

[command]/usr/local/bin/node /Users/vsts/agent/2.148.2/scripts/build-logs-analyzer/node_modules/@build/logs-analyzer-build-script/script/bin.js *** 4385d67d-30d5-416b-b1a0-8c701d438151 Android Xamarin
ApplicationInsights:Sender [ { Error: unable to verify the first certificate
at Error (native)
at TLSSocket.<anonymous> (_tls_wrap.js:1092:38)
at emitNone (events.js:86:13)
at TLSSocket.emit (events.js:185:7)
at TLSSocket._finishInit (_tls_wrap.js:609:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:439:38) code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE' } ]
##[section]Finishing: Analyze build log
##[section]Starting: Checkout

@markwolff
Contributor

markwolff commented Mar 22, 2019

I've raised this issue internally; a fix is in progress for the endpoint server and should be rolled out soon. No exact ETA to report, but I'll post an update in a few weeks if nothing changes.
