This repository has been archived by the owner. It is now read-only.

UvException (Error -4047 EPIPE broken pipe) timing out HTTP requests #1182

Closed
davenewza opened this issue Oct 26, 2016 · 56 comments

@davenewza commented Oct 26, 2016

The following exception is being logged by Kestrel intermittently when HTTP requests are made to our application hosted in an Azure Web App. The request ultimately times out with a 502 at the client, with the response:

The specified CGI application encountered an error and the server terminated the process.

It appears that the server goes through periods of working perfectly fine, followed by periods where this issue occurs repeatedly. It is not consistent.

Exception

Logger name: Microsoft.AspNetCore.Server.Kestrel
Log level: Error
State: TcpListenerPrimary.OnConnection
Exception:

Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException: Error -4047 EPIPE broken pipe
    at Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.Libuv.Check(Int32 statusCode)
    at Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvWriteReq.Write2(UvStreamHandle handle, ArraySegment`1 bufs, UvStreamHandle sendHandle, Action`4 callback, Object state)
    at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.ListenerPrimary.DispatchConnection(UvStreamHandle socket)
    at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.TcpListenerPrimary.OnConnection(UvStreamHandle listenSocket, Int32 status)

project.json

{
  "webroot": "wwwroot",
  "dependencies": {
    "Microsoft.AspNetCore.Mvc": "1.0.1",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.AspNetCore.Diagnostics": "1.0.0",
    "Microsoft.AspNetCore.StaticFiles": "1.0.0",
    "Microsoft.AspNetCore.Mvc.Cors": "1.0.1",
    "Microsoft.AspNetCore.Mvc.Core": "1.0.1",
    "Microsoft.AspNetCore.Mvc.Abstractions": "1.0.1",
    "Microsoft.AspNetCore.Http": "1.0.0",
    "Microsoft.AspNetCore.Http.Abstractions": "1.0.0",
    "Microsoft.Extensions.Configuration": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Abstractions": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.PlatformAbstractions": "1.0.0",
    "Microsoft.AspNetCore.Authentication.JwtBearer": "1.0.0",
    "Microsoft.AspNetCore.Authorization": "1.0.0",
    "Microsoft.ApplicationInsights.AspNetCore": "1.0.2",
    "Microsoft.AspNetCore.Mvc.Formatters.Json": "1.0.1",
    "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
    "Microsoft.AspNetCore.Cors": "1.0.0",
    "Newtonsoft.Json": "9.0.1",
    "RestSharp": "105.2.3",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
    "IdentityModel": "1.11.0"
  },
  "frameworks": {
    "net461": {}
  },
  "tools": {
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": {
      "version": "1.0.0-preview2-final"
    }
  },
  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true,
    "warningsAsErrors": false
  },
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },
  "scripts": {
    "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
  },
  "publishOptions": {
    "include": [
      "wwwroot",
      "appsettings.json",
      "config.json",
      "config.development.json",
      "config.test.json",
      "config.production.json",
      "web.config"
    ]
  }
}

web.config:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.web>
    <httpRuntime 
      executionTimeout="300000" 
      maxRequestLength="512000" />
  </system.web>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified"/>
    </handlers>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="524288000"/>
      </requestFiltering>
    </security>
    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" forwardWindowsAuthToken="false"/>
  </system.webServer>
</configuration>

I have been through this MSDN article, but with no luck.

@ThomasArdal commented Oct 26, 2016

Same as #1179?

@davenewza (Author) commented Oct 26, 2016

@ThomasArdal Not entirely, it seems. I have this issue with Microsoft.AspNetCore.Server.Kestrel 1.0.1, whereas you seemed to have the problem with 1.0.0 only. Are you now experiencing "-4089 EAFNOSUPPORT address family not supported" after upgrading to 1.0.1?

@ThomasArdal commented Oct 26, 2016

@davenewza Got the -4047 error 3 hours ago (after upgrading to 1.0.1). I think it's the same. It still relates to 404s in my case.

@davenewza (Author) commented Oct 27, 2016

@ThomasArdal The -4047 errors time out the HTTP requests on my side.

Not sure if this means anything to someone, but here is the entry in the web server logs for said requests: https://gist.github.com/davenewza/a9ed0d796b17fb2ac34de9413c14db5c

@ThomasArdal commented Oct 27, 2016

@davenewza I see the same error in my logs.

@davenewza (Author) commented Oct 27, 2016

To make sure it isn't something on my side, I've set up an availability ping using Application Insights:

[screenshot]

@Rinsen commented Oct 27, 2016

I receive loads of log entries with the exact same problem. It all started last Saturday at 2016-10-22 19:27:39.7301205 +00:00, and since then I've had a steady flow of these errors and a terribly slow service.

I was on vacation that weekend and for some days before, so no deploys were made during that time, and none have been made since this started, but I still receive loads of these errors.

@halter73 (Member) commented Oct 27, 2016

Hi everyone. There appears to be something that changed in the Azure Web App environment that we're looking into.

In the meantime, you should be able to work around this issue temporarily by setting Kestrel's thread count to 1 in your call to UseKestrel(). E.g.:

var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        options.ThreadCount = 1;
    })
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

@halter73 (Member) commented Oct 28, 2016

@davenewza Do you have a date and time for when the request logged in https://gist.github.com/davenewza/a9ed0d796b17fb2ac34de9413c14db5c took place?

@imperugo commented Oct 28, 2016

Thx @halter73
What is the impact of ThreadCount = 1?
How does it affect the application in terms of performance, reliability, response time and so on?

@davenewza (Author) commented Oct 28, 2016

@halter73 The first occurrence of this happened at 2016-10-22T20:40:00.7093608Z. Very similar to what @Rinsen has reported.

Willing to make the ThreadCount change, but we need to know how this could affect performance and such.

@Rinsen commented Oct 28, 2016

After changing ThreadCount to 1 I don't get any new errors in my logs, @halter73.

But I agree with @imperugo and @davenewza: what is the impact of this change?

@imperugo commented Oct 28, 2016

Just another piece of info about the problem: if I restart my Azure App Service, the error disappears. After a few hours (6-8 hours or more) it comes back.

@davenewza (Author) commented Oct 28, 2016

Likewise. The issue was resolved after rereleasing. Holding thumbs that it doesn't return.

[screenshot]

@imperugo commented Oct 28, 2016

@davenewza we did the same yesterday and, after the restart, it was good. At 8 PM (UTC+2) the errors were back.

@halter73 (Member) commented Oct 28, 2016

The ThreadCount change basically reduces the number of threads Kestrel has available for its IO operations. The error you're seeing happens when IO operations are being handed off to "secondary" threads.

The default Kestrel ThreadCount is half the number of logical CPU cores, and setting it to one can hurt performance if your application is bottlenecked on reading and writing request/response data.
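For a concrete illustration, here is a rough sketch of the default described above (half the logical cores, never fewer than one); this illustrates the rule, not necessarily Kestrel's exact source:

using System;

static class KestrelDefaults
{
    // Illustration only: half the logical cores, with a floor of 1.
    public static int DefaultThreadCount()
    {
        int half = Environment.ProcessorCount >> 1;
        return half < 1 ? 1 : half;
    }
}

Under this sketch, a 4-core instance would default to 2 libuv threads.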

@halter73 (Member) commented Oct 28, 2016

@davenewza Did you make the ThreadCount change when you rereleased?

@npnelson commented Oct 28, 2016

I also experienced this issue on Azure App Service. Everything was running great for weeks and all of a sudden earlier this week, I started getting these errors. It appears to be correlated with when the .NET 1.0.1 runtime bits were added to my Azure App Service image. My apps automatically rolled forward to the 1.0.1 runtime as outlined here:

https://blogs.msdn.microsoft.com/dotnet/2016/09/13/announcing-september-2016-updates-for-net-core-1-0/

I followed the procedure to force it back to the 1.0.0 runtime and the errors have gone away.

I also decided that I should add the CoreCLR version to my logging contexts so I can tell if this gets switched out from under me again, but I can't figure out how to get the CoreCLR version that my app is running under (see my question here: http://stackoverflow.com/questions/40297192/how-to-determine-version-of-coreclr-runtime-from-within-application).
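One hedged possibility (not from this thread, and the attribute contents can vary by runtime build) is to read the informational version of the assembly that contains System.Object, which generally tracks the core library build the app is running on:

using System.Reflection;

static class RuntimeInfo
{
    // Sketch only: the informational version of the core library assembly
    // usually reflects the CoreCLR/framework build the app is running on.
    public static string CoreLibraryVersion()
    {
        var coreAssembly = typeof(object).GetTypeInfo().Assembly;
        var attribute = coreAssembly.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
        return attribute?.InformationalVersion ?? "unknown";
    }
}

The returned string could then be added to the logging context at startup.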

If the automatic roll forward policy to 1.0.1 proves to be the culprit, I think that policy needs to be reconsidered. I was on a wild goose chase for days trying to figure out what changed (on the bright side, I have a much deeper understanding of all the parts now). It was just a little bit of luck that I made the connection to the Azure App Service rollout of 1.0.1.

@cesarblum (Contributor) commented Oct 29, 2016

@npnelson @davenewza Are you guys seeing this issue only when the server is under high load?

@npnelson commented Oct 29, 2016

@cesarbs I can only give you a guess that it's not strictly high load. I have all of my Web Apps running on the same 2 S3 instances (4 cores, 7 GB RAM). The Azure portal shows my CPU usage around 13% and memory at 70% utilization. High load means different things to different people, but I'd be surprised if I got more than 10 requests per second total at peak.

There is a chance that it might be more likely to happen when I get 2 or 3 requests in a second, where I might normally get only 1 request per second, if that.

I haven't rolled all of my apps back to 1.0.0 yet, so here is a recent look:

28 Oct 2016 20:18:23.251 TcpListenerPrimary.OnConnection
28 Oct 2016 20:03:19.524 TcpListenerPrimary.OnConnection
28 Oct 2016 19:38:17.655 TcpListenerPrimary.OnConnection
28 Oct 2016 19:03:14.778 TcpListenerPrimary.OnConnection
28 Oct 2016 19:03:14.778 TcpListenerPrimary.OnConnection
28 Oct 2016 19:03:08.377 TcpListenerPrimary.OnConnection
28 Oct 2016 19:01:26.722 TcpListenerPrimary.OnConnection
28 Oct 2016 18:59:43.727 TcpListenerPrimary.OnConnection
28 Oct 2016 18:38:13.231 TcpListenerPrimary.OnConnection
28 Oct 2016 17:23:29.811 TcpListenerPrimary.OnConnection
28 Oct 2016 17:23:29.764 TcpListenerPrimary.OnConnection

It does seem to be a "when it rains, it pours" kind of thing, because before I rolled some of my apps back to the 1.0.0 runtime, this would be a typical distribution:

28 Oct 2016 14:07:30.600 TcpListenerPrimary.OnConnection
28 Oct 2016 14:07:24.178 TcpListenerPrimary.OnConnection
28 Oct 2016 14:07:11.882 TcpListenerPrimary.OnConnection
28 Oct 2016 14:07:02.757 TcpListenerPrimary.OnConnection
28 Oct 2016 14:07:02.148 TcpListenerPrimary.OnConnection
28 Oct 2016 14:06:32.557 TcpListenerPrimary.OnConnection
28 Oct 2016 14:06:19.367 TcpListenerPrimary.OnConnection
28 Oct 2016 14:06:03.442 TcpListenerPrimary.OnConnection
28 Oct 2016 14:02:23.828 TcpListenerPrimary.OnConnection
28 Oct 2016 14:02:23.683 TcpListenerPrimary.OnConnection
28 Oct 2016 13:57:32.594 TcpListenerPrimary.OnConnection
28 Oct 2016 13:57:23.304 TcpListenerPrimary.OnConnection

I haven't tried this, but you might be able to reproduce it by running a 1.0.1 runtime app on an Azure S3 App Service instance and then just posting some bytes (in one of my apps that has the problem, I take the bytes posted to me in the body, write them to TableStorage and then place a queue message on ServiceBus). Maybe you could throw in an await Task.Delay(1000) to simulate that work. Then make sure you are hitting the server from multiple clients and let it run for a few hours.
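A rough sketch of that kind of repro endpoint (names are hypothetical, and the TableStorage/ServiceBus work is replaced by the suggested delay):

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Sketch only: accept a POST body, simulate the downstream storage/queue work
// with a one-second delay, and return 200 so external clients can hit it
// repeatedly for a few hours.
[Route("api/[controller]")]
public class IngestController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Post()
    {
        using (var buffer = new MemoryStream())
        {
            await Request.Body.CopyToAsync(buffer); // capture the posted bytes
            await Task.Delay(1000);                 // stand-in for the real work
            return Ok(buffer.Length);               // echo the byte count
        }
    }
}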

Unfortunately, I can't reproduce it at will, but it is definitely there.

@npnelson commented Oct 29, 2016

Forget about what I said about rolling back to 1.0.0 from 1.0.1. I checked my logs this morning and the errors were back. However, I have some additional findings.

The earliest occurrence of the error I see in my logs is 10/25/2016 17:34:46 Eastern.

I updated my site at 10/28/2016 17:07:31 Eastern. Everything was fine for a little over 3 hours, then the first error popped up at 20:18, another at 23:33, and then, starting at 00:18:44 on 10/29, they started happening every 10 minutes, almost to the second, most of the time. Below is a small sample of the -4047 errors for just this app; I can provide more logs if needed.

[screenshot]

@Rinsen commented Oct 29, 2016

I have an experimental setup with a relatively low, static load, and I have not seen any issues since I changed the thread count.

@davidfowl (Member) commented Oct 29, 2016

Changing the thread count to 1 should avoid the problem completely because the code path with the exception never runs in that case. We just need to figure out why it's so seemingly random... If we had a consistent repro it would be easier to look at.

@Rinsen commented Oct 29, 2016

The app causing my errors is a very small experimental prototype with a few custom middlewares and not much more than an EF backend. Any good ideas to work with, @davidfowl?

@Rinsen commented Oct 29, 2016

@CesarBS No, my load is consistent and mostly static, with only about 1000-2000 requests per 24 hours.

@davenewza (Author) commented Oct 30, 2016

@halter73 Nope, I did not make the ThreadCount change.

@CesarBS This problem was occurring just the same on light load.

@npnelson commented Oct 30, 2016

@CesarBS I obsessed all weekend trying to pin down the magic formula to reproduce this and am still coming up short. However, our applications aren't as busy on the weekends, and I was able to make more sense of the logs.

I deployed a bare-bones Web API application (i.e. using JwtBearer authentication instead of OpenIdConnect) to a new single S3 instance and also deployed it to our existing 2-instance S3 group. I can't reproduce the problem on the single S3 instance, but I do get the error on the 2-instance S3 group that has other apps on it. I have the Always On setting set to On (which appears to simply issue an HTTP GET every five minutes).

Saturday night I restarted all the apps, and after two hours I would usually see an error on every other Always On request (i.e. every 10 minutes I would see an error). Sunday morning I restarted again. It took about 10.5 hours for the errors to resurface, but this time they are less frequent than before.

Also interesting is that I NEVER see the error for our front-end web app, which uses OpenIdConnect as opposed to JwtBearer, and on the weekend the front-end app is just as idle as the two WebAPI apps that did experience the error during the Always On pings.

Sorry to keep throwing stuff out there without being able to reproduce it, but maybe it will trigger some helpful thoughts.

@razzemans commented Oct 31, 2016

Just wanted to mention that we're seeing the same error. The thing is, we had the -4089 error until 10/26/2016, 2:07:01 AM. Those errors completely disappeared after that (no deploys done!). After that, we started seeing the -4047 errors, with the first occurrence at 10/26/2016, 8:53:37 PM (GMT+1 timezone). We've had about 1.5K of them since then.

Our load isn't high (around 1000 reqs/hour) and we haven't made the ThreadCount change yet.

halter73 added the 3 - Done label and removed the 2 - Working label Nov 18, 2016

@halter73 (Member) commented Nov 18, 2016

We found the service that was opening the Kestrel named pipes when running on Azure Web Apps. We are looking into reconfiguring that service not to open these named pipes, but I don't have an exact timeline for when that change will be deployed.

In the meantime, 1.1.0 is out on NuGet.org. If you upgrade and redeploy, that should resolve this issue.

halter73 closed this Nov 18, 2016

@runxc1 commented Nov 23, 2016

I am using Kestrel 1.1.0 and just found this thread while trying to find out where these errors come from. Below is what I am seeing in the logs:

2016-11-22 23:31:30.784 +00:00 [Warning] Unable to bind to http://localhost:1062 on the IPv6 loopback interface.
System.AggregateException: One or more errors occurred. (Error -4089 EAFNOSUPPORT address family not supported) ---> Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException: Error -4089 EAFNOSUPPORT address family not supported
at Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.Libuv.tcp_bind(UvTcpHandle handle, SockAddr& addr, Int32 flags)
at Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvTcpHandle.Bind(ServerAddress address)
at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.TcpListenerPrimary.CreateListenSocket()
at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.Listener.b__8_0(Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.ListenerPrimary.d__12.MoveNext()
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
at System.Threading.Tasks.Task.Wait()
at Microsoft.AspNetCore.Server.Kestrel.Internal.KestrelEngine.CreateServer(ServerAddress address)
at Microsoft.AspNetCore.Server.Kestrel.KestrelServer.Start[TContext](IHttpApplication`1 application)
---> (Inner Exception #0) Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException: Error -4089 EAFNOSUPPORT address family not supported
at Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.Libuv.tcp_bind(UvTcpHandle handle, SockAddr& addr, Int32 flags)
at Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvTcpHandle.Bind(ServerAddress address)
at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.TcpListenerPrimary.CreateListenSocket()
at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.Listener.b__8_0(Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.ListenerPrimary.d__12.MoveNext()<---

@halter73 (Member) commented Nov 23, 2016

@runxc1 This is more closely related to #1001 and aspnet/IISIntegration#239. Rest assured this is a benign, but certainly scary-looking, warning indicating that the Azure Web Apps environment doesn't support listening to the IPv6 loopback interface.

@muratg Did we make any progress on the conversation about getting IPv6 supported? Should aspnet/IISIntegration#239 be kept open to track the issue until it's resolved?

@Tratcher (Member) commented Nov 23, 2016

Consider reopening #1001, not aspnet/IISIntegration#239.

@halter73 (Member) commented Nov 23, 2016

Why? The reason we closed #1001 in the first place is because we opened aspnet/IISIntegration#239. I don't see anything actionable for Kestrel. The warning makes sense, but if Web Apps doesn't update in a reasonable time frame, IISIntegration should configure Kestrel to listen on 127.0.0.1 instead of localhost.
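For anyone who wants to experiment in the meantime, here is a minimal sketch of that kind of change in application code, assuming a self-hosted scenario where the app controls its own URLs and Startup is the usual startup class (behind IIS/ANCM the integration layer picks the binding, so this is illustrative only, and the port is arbitrary):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Sketch only: bind explicitly to the IPv4 loopback address instead of
        // "localhost", so no IPv6 bind is ever attempted.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://127.0.0.1:5000")
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}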

@Sharpeli commented Nov 30, 2016

Is it certain that the issue will be resolved after upgrading Kestrel to 1.1.0 and redeploying? Has someone tested that?

@davenewza (Author) commented Nov 30, 2016

Nope. I'm sticking with options.ThreadCount = 1 until it is certain.

@npnelson commented Nov 30, 2016

I have upgraded everything to the 1.1.0 train and haven't had any problems since (and yes, I did experience the issue on the 1.0.1 train).

@ThomasArdal commented Nov 30, 2016

Same here. Experienced the error on Azure with 1.0.0 and 1.0.1. After upgrading to 1.1.0, I haven't seen the problem.

@halter73 (Member) commented Nov 30, 2016

FWIW I tested this when I implemented the fix. I was able to repro this issue on a few of my own Azure Web Apps, so it was pretty simple to verify.

@imperugo commented Dec 2, 2016

Just pushed 1.1 to production without ThreadCount = 1 and now I'm getting:

ListenerPrimary.ReadCallback

Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException: Error -4095 EOF end of file
  at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.ListenerPrimary.PipeReadContext.ReadCallback(UvStreamHandle dispatchPipe, Int32 status)

@halter73 (Member) commented Dec 5, 2016

@imperugo In 1.1, the "-4095 EOF end of file" error is fairly benign. It won't cause requests to fail with a 502 as described in the original bug report. The error should also be fairly infrequent (in my experience, about once a day).

Still, you shouldn't see any errors with ThreadCount = 1. The ListenerPrimary class isn't even used in that case. Maybe the configuration is getting overridden somewhere?
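One quick way to check (a sketch, not an official diagnostic): echo the value from inside the UseKestrel callback so the startup output confirms what was actually configured at that point.

using System;
using System.IO;
using Microsoft.AspNetCore.Hosting;

// Sketch only: set ThreadCount and print what was configured in this callback.
var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        options.ThreadCount = 1;
        Console.WriteLine($"Kestrel ThreadCount = {options.ThreadCount}");
    })
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();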

@imperugo commented Dec 12, 2016

Hi @halter73,
I confirm it happens fairly infrequently (once a day, maybe twice). The configuration is pretty normal, nothing special, just using SSL.

With ThreadCount = 1 it never happens.

@bangonkali commented Feb 5, 2017

I experienced the same issue. It seems to happen on the server when I put Kestrel behind Nginx with SSL. On the dev machines, this never happens.

@mmartain commented May 29, 2017

Also having the issue with the Error -4095 EOF end of file about once a day.

This is with Kestrel behind a reverse proxy on IIS.

@fedoranimus commented Jun 5, 2017

I, too, am seeing this issue when behind an Nginx reverse proxy (with SSL), but never experience it in development.

@mmartain commented Jun 5, 2017

I think we can conclude that being behind a reverse proxy has an impact. I should also mention that I, too, am using SSL.

@RehanSaeed commented Jul 4, 2017

I'm still getting this error on the latest 1.1.2 release. The error occurs about once per hour when posting a fairly large number of 10 KB JSON messages. I'm using IIS as a reverse proxy. #1814 seems to be a duplicate.

System.AggregateException: One or more errors occurred. ---> 
System.IO.IOException: Error -4077 ECONNRESET connection reset by peer ---> Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException: Error -4077 ECONNRESET connection reset by peer
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.SocketInput.CheckConnectionError()
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.SocketInputExtensions.<PeekAsyncAwaited>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.MessageBody.ForContentLength.<PeekAsyncAwaited>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.MessageBody.<ReadAsyncAwaited>d__12.MoveNext()
   --- End of inner exception stack trace ---
   at System.IO.Compression.DeflateStream.EndRead(IAsyncResult asyncResult)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncTrimPromise`1.Complete(TInstance thisRef, Func`3 endMethod, IAsyncResult asyncResult, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.WebUtilities.FileBufferingReadStream.<ReadAsync>d__38.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.IO.StreamReader.<ReadBufferAsync>d__97.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.IO.StreamReader.<ReadToEndAsyncInternal>d__62.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Bridge.Logging.AspNetCore.RequestResponseLoggingAttribute.<GetContentValue>d__10.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Bridge.Logging.AspNetCore.RequestResponseLoggingAttribute.<HandleResult>d__14.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Bridge.Logging.AspNetCore.RequestResponseLoggingAttribute.<OnResultExecutionAsync>d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.<InvokeAsync>d__20.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Builder.RouterMiddleware.<Invoke>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.RequestDecompression.RequestDecompressionMiddleware.<Invoke>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware`1.<Invoke>d__18.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware`1.<Invoke>d__18.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Server.IISIntegration.IISMiddleware.<Invoke>d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Hosting.Internal.RequestServicesContainerMiddleware.<Invoke>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.Frame`1.<RequestProcessingAsync>d__2.MoveNext()
---> (Inner Exception #0) System.IO.IOException: Error -4077 ECONNRESET connection reset by peer ---> 
Microsoft.AspNetCore.Server.Kestrel.Internal.Networking.UvException: Error -4077 ECONNRESET connection reset by peer
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.SocketInput.CheckConnectionError()
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.SocketInputExtensions.<PeekAsyncAwaited>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.MessageBody.ForContentLength.<PeekAsyncAwaited>d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ValueTaskAwaiter`1.GetResult()
   at Microsoft.AspNetCore.Server.Kestrel.Internal.Http.MessageBody.<ReadAsyncAwaited>d__12.MoveNext()<---

@davidfowl (Member) commented Jul 5, 2017

@RehanSaeed can you open a new bug? The error you're getting doesn't have anything to do with the original exception.

@RehanSaeed commented Jul 5, 2017

@davidfowl #1814 is already open and has the same issue.

@niemyjski commented Nov 27, 2017

I'm seeing this in our app, which we just deployed to production: 5 times in the last 24 hours, using the latest stable (2.0.3) on netcoreapp on Azure with the IIS feature.

Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvException: Error -4095 EOF end of file
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.LibuvFunctions.ThrowError(System.Int32 statusCode)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.ListenerPrimary.PipeReadContext.ReadCallback(Microsoft.AspNetCore.Server.Kestrel.Transport.Libuv.Internal.Networking.UvStreamHandle dispatchPipe, System.Int32 status)