Silos no longer stable after upgrading to v3.6.2 #7973
Which version of Orleans were you using prior to 3.6.2?
3.5.0
Did you update any packages other than Orleans in the transition?
Just our own internal NuGet packages and anything that caused package downgrades.
I wonder what is causing this. Do you see log messages complaining of long delays? You might see improved stability by enabling these two options:

```csharp
siloBuilder.Configure<ClusterMembershipOptions>(options =>
{
    options.ExtendProbeTimeoutDuringDegradation = true;
    options.EnableIndirectProbes = true;
})
```
All I see in the log a few seconds after the grain starts processing work are Orleans.Networking.Shared.SocketConnectionException and Orleans.Runtime.OrleansMessageRejectionException, which, as we know, will cause eviction, and in turn Kubernetes will destroy the pod.
Which version of .NET are you using? Is it .NET 6? If not, upgrading may help you in the event that you are indeed seeing crippling ThreadPool starvation. Also, what are the settings on your k8s pods: do you have CPU requests/limits set? By the way, please do try the configuration above. We have since made it the default and will likely make it the default in 3.x at some point.
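(For reference, a minimal sketch of how ThreadPool starvation can be surfaced from inside the process. The monitor class and method below are invented for illustration and are not part of the code discussed in this thread; only the `ThreadPool` counter properties, available on .NET Core 3.0 and later, are real APIs.)

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch: periodically log ThreadPool pressure so starvation shows up
// alongside the Orleans membership warnings. The class name is made up; the
// ThreadPool properties exist on .NET Core 3.0+ / .NET 6.
public static class ThreadPoolMonitor
{
    public static async Task RunAsync(TimeSpan interval, CancellationToken cancellation)
    {
        while (!cancellation.IsCancellationRequested)
        {
            Console.WriteLine(
                $"ThreadPool threads={ThreadPool.ThreadCount}, " +
                $"queued={ThreadPool.PendingWorkItemCount}, " +
                $"completed={ThreadPool.CompletedWorkItemCount}, " +
                $"cpus={Environment.ProcessorCount}");

            await Task.Delay(interval, cancellation);
        }
    }
}
```

A queue length that keeps climbing while the thread count hovers near the pod's CPU limit is the classic starvation signature.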
.NET 6 and .NET 6-Windows. I'll start posting our startup code and deployment code.
Our silo builder:

```csharp
if (o.EnableCluster || o.EnableSilo)
{
bool useKubeHosting = false;
var clusterConfig = o.OrleansClusterConfiguration;
webHostBuilder.UseOrleans(siloBuilder =>
{
if (o.EnableCluster && o.EnableDevelopmentCluster == false)
{
#if(NET6_0_OR_GREATER)
if (!string.IsNullOrEmpty(System.Environment.GetEnvironmentVariable(KubernetesHostingOptions.PodNamespaceEnvironmentVariable)))
{
siloBuilder.UseKubernetesHosting();
useKubeHosting = true;
}
#endif
switch (clusterConfig.ConnectionConfig.AdoNetConstant.ToLower())
{
case "system.data.sqlclient":
siloBuilder.UseAdoNetClustering(options =>
{
options.Invariant = clusterConfig.ConnectionConfig.AdoNetConstant;
options.ConnectionString = clusterConfig.ConnectionConfig.ConnectionString;
})
.UseAdoNetReminderService(options =>
{
options.Invariant = clusterConfig.ReminderConfigs[0].AdoNetConstant;
options.ConnectionString = clusterConfig.ReminderConfigs[0].ConnectionString;
})
.AddAdoNetGrainStorage(clusterConfig.StorageConfigs[0].Name, options =>
{
options.Invariant = clusterConfig.StorageConfigs[0].AdoNetConstant;
options.ConnectionString = clusterConfig.StorageConfigs[0].ConnectionString;
});
break;
case "azurecosmostable":
siloBuilder.UseAzureStorageClustering(options =>
{
#if NETCOREAPP3_1
options.ConnectionString = clusterConfig.ConnectionConfig.ConnectionString;
#else
options.ConfigureTableServiceClient(clusterConfig.ConnectionConfig.ConnectionString);
#endif
})
.UseAzureTableReminderService(options =>
{
#if NETCOREAPP3_1
options.ConnectionString = clusterConfig.ReminderConfigs[0].ConnectionString;
#else
options.ConfigureTableServiceClient(clusterConfig.ReminderConfigs[0].ConnectionString);
#endif
})
.AddAzureTableGrainStorage(clusterConfig.StorageConfigs[0].Name, options =>
{
#if NETCOREAPP3_1
options.ConnectionString = clusterConfig.StorageConfigs[0].ConnectionString;
#else
options.ConfigureTableServiceClient(clusterConfig.StorageConfigs[0].ConnectionString);
#endif
});
break;
}
}
else if(o.EnableDevelopmentCluster)
{
siloBuilder.UseDevelopmentClustering(options =>
{
var address =
clusterConfig.PrimarySiloAddress.Split(new[] { ':' }, StringSplitOptions.RemoveEmptyEntries);
options.PrimarySiloEndpoint = new IPEndPoint(IPAddress.Parse(address[0]), Convert.ToInt32(address[1]));
}).UseInMemoryReminderService()
.AddMemoryGrainStorage("GrainStorage");
}
siloBuilder
.ConfigureLogging((hostingContext, logging) =>
{
logging.AddConsole();
logging.AddDebug();
if (!string.IsNullOrEmpty(telemetryKey))
{
logging.AddApplicationInsights(telemetryKey);
}
logging.AddSerilog();
})
.Configure<ClusterOptions>(options =>
{
if (!useKubeHosting)
{
options.ClusterId = clusterConfig.ClusterOptions.ClusterId;
options.ServiceId = GenerateServiceId(clusterConfig.ClusterOptions.ServiceId);
}
});
if (o.OrleansClusterConfiguration.EndPointOptions.AdvertisedIPAddress?.GetAddressBytes()?.Length > 0)
{
siloBuilder.ConfigureEndpoints(o.OrleansClusterConfiguration.EndPointOptions.AdvertisedIPAddress, GenerateSiloPortNumber(clusterConfig.EndPointOptions.SiloPort), GenerateGatewayPortNumber(clusterConfig.EndPointOptions.GatewayPort));
}
else
{
siloBuilder.ConfigureEndpoints(GenerateSiloPortNumber(clusterConfig.EndPointOptions.SiloPort), GenerateGatewayPortNumber(clusterConfig.EndPointOptions.GatewayPort));
}
if (Environment.OSVersion.Platform == PlatformID.Win32NT)
{
siloBuilder.UsePerfCounterEnvironmentStatistics();
}
else
{
siloBuilder.UseLinuxEnvironmentStatistics();
}
siloBuilder.Configure<ClusterMembershipOptions>(options =>
{
options.ExtendProbeTimeoutDuringDegradation = true;
options.EnableIndirectProbes = true;
});
siloBuilder.Configure<SiloMessagingOptions>(options =>
{
options.ResponseTimeout = TimeSpan.FromMinutes(30);
options.SystemResponseTimeout = TimeSpan.FromMinutes(30);
});
if (o.GrainAssemblies != null)
{
o.GrainAssemblies.BindConfiguration(config);
siloBuilder.ConfigureApplicationParts(o.GrainAssemblies.DefineApplicationParts);
}
siloBuilder.Configure<SerializationProviderOptions>(options =>
{
options.SerializationProviders.Add(typeof(Orleans.Serialization.ProtobufSerializer));
});
#if(NET5_0_OR_GREATER)
if (o.OrleansClusterConfiguration?.EnableDashboard == true)
{
siloBuilder.UseDashboard(o =>
{
o.HostSelf = false;
o.HideTrace = true;
});
siloBuilder.UseDashboardEmbeddedFiles();
}
#endif
});
```
Deployment
You said you have multiple silo processes in each container, but you're also using the Kubernetes hosting package, is that right? I wonder how that would work: it's not the scenario that package is designed for (which is one silo per pod).
These timeout lengths are concerning. What prompted such long timeouts?

```csharp
siloBuilder.Configure<SiloMessagingOptions>(options =>
{
    options.ResponseTimeout = TimeSpan.FromMinutes(30);
    options.SystemResponseTimeout = TimeSpan.FromMinutes(30);
});
```

I don't suppose you're able to profile & share traces of your pods while they are running, e.g. by collecting traces using
I just added the Kubernetes hosting package to see if it would help; right now I'm grasping at straws, because this behavior has come out of nowhere and we can't roll back to a previous container, as this update required backend database changes. That said, let me explain our setup a little more clearly. Our product is designed to run on client-managed hardware as Windows Services or in AKS, so we manipulate startup and load in grain assemblies based on configuration.
With this in place we can deploy a single .exe to our client, which will load all the grains in all assemblies, or in AKS we can take the same container and split the work across 3 deployments.
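(For reference, a minimal sketch of what "load grain assemblies based on configuration" could look like with Orleans 3.x application parts. The extension method, class name, and the way assembly names are supplied are assumptions for illustration, not the actual code used here.)

```csharp
using System.Reflection;
using Orleans;
using Orleans.Hosting;

// Hypothetical sketch: register grain assemblies chosen by configuration so the same
// container image can host different workloads per AKS deployment.
public static class GrainAssemblyLoaderExtensions
{
    public static ISiloBuilder AddConfiguredGrainAssemblies(
        this ISiloBuilder siloBuilder, string[] assemblyNames)
    {
        return siloBuilder.ConfigureApplicationParts(parts =>
        {
            foreach (var name in assemblyNames)
            {
                // Load each configured assembly and register its grain classes,
                // pulling in referenced assemblies as well.
                var assembly = Assembly.Load(new AssemblyName(name));
                parts.AddApplicationPart(assembly).WithReferences();
            }
        });
    }
}
```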
I'm not seeing anything in the diff between 3.5.0 and 3.6.2 which might have caused this. Please try the aforementioned configuration parameters as an attempt to prevent this and give you some respite while we work together to investigate the root cause (which is most likely caused by some kind of thread pool starvation, if I were to guess, but we cannot conclude that without seeing CPU profiling traces or log messages). Please also remove the Kubernetes integration for now, since it's not suitable for this use case.
We have a function that aggregates tens of thousands of documents (PDF, Word, Excel, etc.) and combines them into a single document. When we rolled out this feature and our clients started using it, we started noticing message response timeouts, as some of these jobs would take 20-30 minutes. Extending the timeout resolved that issue. We have since implemented Orleans.SyncWorkers so we could perhaps bring the timeouts back down.
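(For reference, a minimal sketch of the usual alternative to very long response timeouts: accept the request quickly and run the long merge off the grain's turn on a dedicated thread. The grain, its interface, and its methods below are invented for illustration; this is not the poster's code nor the SyncWork package's API.)

```csharp
using System.Threading;
using System.Threading.Tasks;
using Orleans;

// Invented names, for illustration only.
public interface IDocumentMergeGrain : IGrainWithGuidKey
{
    Task StartMerge(string[] documentIds);
    Task<bool> IsComplete();
}

public class DocumentMergeGrain : Grain, IDocumentMergeGrain
{
    private Task _mergeTask;

    public Task StartMerge(string[] documentIds)
    {
        // Run the heavy merge on a dedicated (LongRunning) thread via the default
        // scheduler, so it neither holds this activation's turn for 20-30 minutes
        // nor ties up a ThreadPool thread that the silo needs for heartbeats.
        _mergeTask = Task.Factory.StartNew(
            () => MergeDocuments(documentIds),
            CancellationToken.None,
            TaskCreationOptions.LongRunning,
            TaskScheduler.Default);

        // The call returns immediately, so ResponseTimeout can stay at its default.
        return Task.CompletedTask;
    }

    public Task<bool> IsComplete() => Task.FromResult(_mergeTask?.IsCompleted == true);

    private void MergeDocuments(string[] documentIds)
    {
        // Placeholder for the real document aggregation work.
    }
}
```

A package like the SyncWork one mentioned above presumably wraps the same idea in a throttled worker pool with progress reporting; this is only the bare pattern.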
Great! By the way, what timezone are you in, and would you prefer to diagnose this on a call?
I'll remove the Kubernetes integration.
Eastern timezone. I can send a Teams meeting.
What's the best way for me to send you a Teams meeting request? I really appreciate you lending a helping hand.
rebond is my alias.
Invite sent, I hope it works for you.
@DocIT-Official let's try again when you're available.
Hello,
Background:
We've been running Orleans in our production SaaS platform for a few years now. We have 8 silos; 4 out of the 8 are the same Docker container, just with different labels in the AKS deployment so they load different grain assemblies. The reason for this is that we have real-time OCR processing and we want that work to run in its own pods. This has been working with great success for over 2 years. However, we upgraded all our NuGet packages to v3.6.2 and now we are getting hundreds of pod restarts, because pods stop responding to heartbeats while processing work, which causes that work to be aborted. I'm looking for some guidance, as this behavior is only observed once deployed to AKS: all our integration tests pass, and nothing is showing up in Application Insights to make us believe there are any unhandled exceptions.