502.3 bad gateway errors couple of hours after deployment #323

Closed
thisissubbuster opened this Issue Feb 10, 2017 · 9 comments

Comments

@thisissubbuster

thisissubbuster commented Feb 10, 2017

I deployed an ASP.NET Core 1.0 Web API service to an Azure Web App in the Standard tier. It works fine for a couple of hours, but then it starts throwing 502.3 Bad Gateway errors. Restarting the web app fixes it, but the same behavior returns after some time. We deploy the service from VSTS. Please help.

@guardrex

Contributor

guardrex commented Feb 10, 2017

Have you checked out related issues and comments like #311, #269, and #245 (comment)? You might have a different problem, but they're worth a read if you haven't seen them. Search the open and closed issues for "502.3" to bring them up.

@thisissubbuster

thisissubbuster commented Feb 11, 2017

I checked them out, but I'm still having the same issue. Let me know if you want to have a look at my files.

@BrianVallelunga

BrianVallelunga commented Feb 14, 2017

I'm also having severe issues with an ASP.NET Core 1.1 app running .NET Core on Azure. Everything will be running fine for hours, but then we'll suddenly have a huge spike in threads being used and the server will throw 502 errors.

Here's a screenshot of a good instance and a bad instance:

[Screenshot: dotnet.exe threads on a good instance vs. a bad instance]

Here's the graph of threads being used by dotnet.exe:

[Graph: dotnet.exe thread count over time]

And the graph of what happens to our site when this occurs:

[Graph: requests and errors during the spike]

I took a memory dump of the w3wp.exe and dotnet.exe processes in case that helps diagnose the issue. I've read through most of the other threads linked, but none of them has a definite solution that I can implement.

The site basically takes in form posts, transforms the data and posts messages onto Azure Service Bus. It also pulls back some data from Redis and returns it. Overall, it's quite basic.

I've seen references to some things being fixed in 1.1.1, but I haven't seen an ETA on when that will be released. I'm open to suggestions on immediate fixes.

@Tratcher

Member

Tratcher commented Feb 14, 2017

@nemenos

nemenos commented Feb 16, 2017

Yes, we have the same issue, and I need it fixed ASAP or need to find a workaround.
Hoping it's the problem fixed in 1.1.1... any ETA?

@halter73

Member

halter73 commented Feb 16, 2017

@BrianVallelunga @nemenos The increasing number of threads might have been caused by a deadlock in Kestrel. The best way to verify this is to take a memory dump and use VS's Debug Managed Memory feature to look at the Parallel Stacks window and see whether it contains the same stacks as the screenshot in the linked Kestrel issue.

If the stacks do look the same, you can work around this issue by not using ASP.NET Core to serve static files and instead serving them directly from IIS, as suggested here. This works because, in a typical app, the only thing that passes a CancellationToken to Response.WriteAsync() is the static file middleware. The deadlock will be fixed in 1.1.1 and in a 1.0.x backport, which will be released soon.
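
For illustration, here is a minimal web.config sketch of the kind of handler mapping that workaround describes. It is an assumption-laden sketch rather than the exact configuration from the suggestion linked above: the file extensions shown and the MyApp.dll name are placeholders.

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- Illustrative mappings: IIS's StaticFileModule answers these requests
           directly, so they never reach the ASP.NET Core Module, Kestrel, or
           the static file middleware. Extend the list to cover your content. -->
      <add name="StaticJs"  path="*.js"  verb="GET,HEAD" modules="StaticFileModule" resourceType="File" />
      <add name="StaticCss" path="*.css" verb="GET,HEAD" modules="StaticFileModule" resourceType="File" />
      <add name="StaticPng" path="*.png" verb="GET,HEAD" modules="StaticFileModule" resourceType="File" />
      <!-- Everything else is still forwarded to the app. -->
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <!-- MyApp.dll is a placeholder for the published app assembly. -->
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false" />
  </system.webServer>
</configuration>
```

With IIS serving those paths itself, the static file middleware never handles them, so it never passes its CancellationToken into Response.WriteAsync() and the deadlock shouldn't be triggered; app.UseStaticFiles() can also be dropped from Startup.Configure if nothing else depends on it.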

@thisispaulsmith

thisispaulsmith commented Feb 17, 2017

We're experiencing exactly the same issue. If 1.1.1 fixes it then it can't come soon enough.

@cesarbs

Contributor

cesarbs commented Mar 14, 2017

1.1.1 is now out there. Anyone still experiencing the hang?

@BrianVallelunga

BrianVallelunga commented Apr 5, 2017

My issues seem mostly resolved. I ended up having to switch to the full .NET Framework because I needed Azure Service Bus support. We've been running 1.1.1 + ASP.NET Core + the full framework for a couple of weeks without issue.

cesarbs closed this Apr 26, 2017
