
Failed to establish a connection. Error: Failed to get routing manager deployment status #103

filariow opened this issue Jan 11, 2021 · 31 comments


@filariow

Describe the bug
When starting a microservice, either in isolation or not, the Bridge to Kubernetes extension for VS Code returns the error below.
On the cluster side we observe that the Envoy proxy containers for each microservice and the ingresses for the isolation are created successfully.
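For context, the cluster-side artifacts described above can be listed directly with kubectl; a minimal sketch, assuming the 'dev' namespace and the 'filario' isolation name that appear in the logs below:

    # Ingresses created for the isolated routes
    kubectl get ingress -n dev
    # Pods with their container names, to spot the injected envoy proxy sidecars
    kubectl get pods -n dev -o custom-columns='NAME:.metadata.name,CONTAINERS:.spec.containers[*].name'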

Application Description:

  • NGINX API gateway
  • 4 microservices
  • 1 front-end web app based on ASP.NET Core and Vue.js

Logs
Bridge to Kubernetes extension output:

Routing successfully enabled for service through pod 'fra-notification-5c97b5db5d-fztmx' in namespace 'dev'.
Pod 'fra-notification-5c97b5db5d-fztmx' created in namespace 'dev'.
Found container 'notification' in pod 'fra-notification-5c97b5db5d-fztmx'.
Preparing to run Bridge To Kubernetes configured as service dev/notification ...
Failed to get routing manager deployment status
Stopping workload and cleaning up...
Restore: Pod 'fra-notification-5c97b5db5d-fztmx' deleted.
Failed to establish a connection. Error: Failed to get routing manager deployment status

Bridge to Kubernetes extension logs:

2021-01-11T22:44:44.347Z | Common Extension Root    | TRACE | Event: binaries-utility-try-get-binaries-success
2021-01-11T22:44:44.348Z | Common Extension Root    | TRACE | Event: connect-service-task-terminal-start-connect <json>{"serviceName":"notification","ports":"50001","isolateAs":"filario"}</json>
2021-01-11T22:44:44.348Z | Common Extension Root    | TRACE | Event: kubectl-client-command <json>{"args":"config view --minify -o json"}</json>
2021-01-11T22:44:44.426Z | Common Extension Root    | TRACE | Event: kubectl-client-command-success <json>{"args":"config view --minify -o json"}</json>
2021-01-11T22:44:44.427Z | Common Extension Root    | TRACE | Event: binaries-utility-try-get-binaries-success
2021-01-11T22:44:44.427Z | Common Extension Root    | TRACE | Event: binaries-utility-try-get-binaries-success
2021-01-11T22:44:44.427Z | Common Extension Root    | TRACE | Event: kubectl-client-command <json>{"args":"config view --minify -o json"}</json>
2021-01-11T22:44:44.508Z | Common Extension Root    | TRACE | Event: kubectl-client-command-success <json>{"args":"config view --minify -o json"}</json>
2021-01-11T22:44:44.508Z | Connect (PGA/fa970826-b8 | TRACE | Event: connect-start-connect <json>{"service":"notification","ports":"50001","isolateAs":"filario","os":"linux"}</json>
2021-01-11T22:44:44.509Z | Common Extension Root    | TRACE | Event: kubectl-client-command <json>{"args":"get namespaces -o jsonpath={.items[*].metadata.name}"}</json>
2021-01-11T22:44:44.956Z | Common Extension Root    | TRACE | Event: kubectl-client-command-success <json>{"args":"get namespaces -o jsonpath={.items[*].metadata.name}"}</json>
2021-01-11T22:44:44.957Z | Connect (PGA/fa970826-b8 | TRACE | Event: connect-validate-non-dev-spaces-cluster <json>{"isDevSpacesCluster":false}</json>
2021-01-11T22:44:47.694Z | Common Extension Root    | TRACE | Event: cli-client-prep-connect-success
2021-01-11T22:46:03.300Z | Common Extension Root    | ERROR | Error: cli-client-connect-error <stack>Error: Failed to get routing manager deployment status\n\n    at ChildProcess.<anonymous> (/home/fra/.vscode/extensions/mindaro.mindaro-1.0.120201211/dist/extension.js:3:140460)\n    at ChildProcess.emit (events.js:223:5)\n    at ChildProcess.EventEmitter.emit (domain.js:475:20)\n    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)</stack>
2021-01-11T22:46:03.303Z | Connect (PGA/fa970826-b8 | ERROR | Error: connect-error <stack>Error: Failed to get routing manager deployment status\n\n    at ChildProcess.<anonymous> (/home/fra/.vscode/extensions/mindaro.mindaro-1.0.120201211/dist/extension.js:3:140460)\n    at ChildProcess.emit (events.js:223:5)\n    at ChildProcess.EventEmitter.emit (domain.js:475:20)\n    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)</stack>
2021-01-11T22:46:03.307Z | Common Extension Root    | ERROR | Error: connect-service-task-terminal-error <stack>Error: Failed to establish a connection. Error: Failed to get routing manager deployment status\n\n    at /home/fra/.vscode/extensions/mindaro.mindaro-1.0.120201211/dist/extension.js:3:72115\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:94:5)</stack>

Environment Details

  1. Windows
VS Code
Version: 1.52.1 (system setup)
Commit: ea3859d4ba2f3e577a159bc91e3074c5d85c0523
Date: 2020-12-16T16:34:46.910Z
Electron: 9.3.5
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Windows_NT x64 10.0.19042

mindaro.mindaro@1.0.120201211
  2. Linux (Manjaro)
VS Code
Version: 1.52.1
Commit: ea3859d4ba2f3e577a159bc91e3074c5d85c0523
Date: 2020-12-16T16:32:10.090Z
Electron: 9.3.5
Chrome: 83.0.4103.122
Node.js: 12.14.1
V8: 8.3.110.13-electron.0
OS: Linux x64 5.10.2-2-MANJARO

mindaro.mindaro@1.0.120201211
@ciacco85

ciacco85 commented Jan 11, 2021

Already seen this issue in #28, without finding useful hints.
😢 Development is stuck again for the whole team! From Dev Spaces to B2K, another blocking impediment!

Any advice for a temporary fallback while a fix/workaround is found for situations like this?

@daniv-msft
Collaborator

Thank you @filariow and @ciacco85 for reporting this issue, and sorry you're facing it. I confirm I'm reproducing it locally as well.
Adding @pragyamehta who is investigating this on our side.

@pragyamehta
Contributor

Hi @ciacco85 and @filariow, thanks for reaching out to us. We are aware of an issue that manifests as "Failed to get routing manager deployment status", and we will be rolling out a fix for this in VS Code tomorrow.

I want to make sure that the issue you are seeing is the one we are fixing. To validate that, could you please collect the logs from the locations below and send them to bridgetokubernetes@microsoft.com? Kindly include a link to this issue as well so that it is easier to track. Also, please let us know if you are using VS or VS Code.

  1. Bridge to Kubernetes logs:
    For Windows: %TEMP%/Bridge to Kubernetes
    For OSX/Linux: $TMPDIR/Bridge to Kubernetes
    If you are a Visual Studio user, please also provide these logs: %temp%\Microsoft.VisualStudio.Kubernetes.Debugging
  2. Routing manager logs (see the sketch after this list):
    kubectl logs -n <namespace> routingmanager-deployment-xxxx > routingmanagerlogs.txt
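A minimal sketch of collecting the routing manager logs, assuming kubectl is on PATH and the routing manager runs in the namespace you connect to (e.g. 'dev' in the logs above); the pod name suffix will differ per cluster:

    # Find the routing manager pod in your namespace
    kubectl get pods -n dev | grep routingmanager-deployment
    # Dump its logs to a file you can attach to the email
    kubectl logs -n dev <routingmanager-deployment-pod> > routingmanagerlogs.txt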

We will take a look and unblock you either way. Thanks!

@filariow
Author

filariow commented Jan 12, 2021

Thank you very much @pragyamehta. I have just sent the email.

@pragyamehta
Contributor

Hi @filariow thanks for sending the logs. I have confirmed that the issue you are seeing should get fixed with tomorrow's release. Please try it out and let us know if it works for you. Thanks!

@filariow
Author

Ok @pragyamehta, I'll check tomorrow and let you know.

@curtstate

curtstate commented Jan 12, 2021

@pragyamehta Rather suddenly (after debugging had been working fine in AKS an hour earlier), I started seeing "Failed to get routing manager deployment status" in Visual Studio yesterday.
How would I know if it's the same issue?

I'm seeing this in the bridge logs:

2021-01-12T17:27:48.2843809Z | Library | TRACE | Using kubectl found at: 'c:\users\USERNAME\appdata\local\microsoft\visualstudio\16.0_9179140b\extensions\fx4nyvrt.emg\ServiceHub\kubectl\win\kubectl.exe'
2021-01-12T17:27:48.2943879Z | Library | TRACE | Invoked long running kubectl PortForward command: '--kubeconfig="C:\Users\USERNAME\AppData\Local\Temp\tmp98A7.tmp" port-forward service/routingmanager-service --pod-running-timeout=1s 57852:80 --namespace dev'
2021-01-12T17:27:49.7094458Z | Library | TRACE | Port forward to routing manager output : 'Forwarding from 127.0.0.1:57852 -> 80'
2021-01-12T17:27:49.7098775Z | Library | TRACE | Port forward to routing manager output : 'Forwarding from [::1]:57852 -> 80'
2021-01-12T17:27:49.7138102Z | Library | TRACE | Port forward to routing manager output : 'Handling connection for 57852'
2021-01-12T17:27:50.1974578Z | Library | WARNG | Error during port forward to routing manager : E0112 09:27:50.195222   14576 portforward.go:400] an error occurred forwarding 57852 -> 80: error forwarding port 80 to pod 98b052fe032005ee38f4e6afde8eca3d2ac419f26fc78d528f6cef72a60c0c19, uid : exit status 1: 2021/01/12 17:27:50 socat[8671] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused
2021-01-12T17:27:50.1990012Z | Library | WARNG | Routing manager's GetStatus threw HttpRequestException.
2021-01-12T17:27:50.2053452Z | Library | TRACE | Dependency: Kubernetes <json>{"target":"PortForward","success":true,"duration":null,"properties":{"error":"E0112 09:27:50.195222   14576 portforward.go:400] an error occurred forwarding 57852 -> 80: error forwarding port 80 to pod 98b052fe032005ee38f4e6afde8eca3d2ac419f26fc78d528f6cef72a60c0c19, uid : exit status 1: 2021/01/12 17:27:50 socat[8671] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused","exitCode":"0"}}</json>
2021-01-12T17:27:50.2289805Z | Library | WARNG | Logging handled exception: System.Net.Http.HttpRequestException: {"StackTrace":"   at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)\r\n   at System.Net.Http.HttpConnectionPool.SendWithNtConnectionAuthAsync(HttpConnection connection, HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)\r\n   at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)\r\n   at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)\r\n   at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)\r\n   at Microsoft.DevSpaces.Library.ManagementClients.RoutingManagementClient.<>c__DisplayClass12_2.<<GetStatusAsync>b__5>d.MoveNext()","Message":"An error occurred while sending the request.","Data":{},"InnerException":{"ClassName":"System.IO.IOException","Message":"The response ended prematurely.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"   at System.Net.Http.HttpConnection.FillAsync()\r\n   at System.Net.Http.HttpConnection.ReadNextResponseHeaderLineAsync(Boolean foldedHeadersAllowed)\r\n   at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146232800,"Source":"System.Net.Http","WatsonBuckets":null},"HelpLink":null,"Source":"System.Net.Http","HResult":-2146232800}
2021-01-12T17:27:51.2404019Z | Library | WARNG | Error when trying to get status from routing manager
2021-01-12T17:27:51.2443573Z | Library | WARNG | Logging handled exception: System.OperationCanceledException: {"ClassName":"System.OperationCanceledException","Message":"The operation was canceled.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"   at System.Threading.CancellationToken.ThrowOperationCanceledException()\r\n   at Microsoft.DevSpaces.Common.Utilities.WebUtilities.RetryUntilTimeWithWaitAsync(Func`2 action, TimeSpan maxWaitTime, TimeSpan waitInterval, CancellationToken cancellationToken)\r\n   at Microsoft.DevSpaces.Library.ManagementClients.RoutingManagementClient.<>c__DisplayClass12_0.<<GetStatusAsync>b__2>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at Microsoft.DevSpaces.Library.ManagementClients.RoutingManagementClient.<>c__DisplayClass12_3.<<GetStatusAsync>b__7>d.MoveNext()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146233029,"Source":"System.Private.CoreLib","WatsonBuckets":null}
2021-01-12T17:27:53.3326348Z | Library | TRACE | Dependency: Kubernetes <json>{"target":"GetFirstNamespacedPodWithLabelAsync","success":true,"duration":null,"properties":{}}</json>
2021-01-12T17:27:53.3452024Z | Library | TRACE | Invoking kubectl PortForward command: --kubeconfig="C:\Users\USERNAME\AppData\Local\Temp\tmpAC6E.tmp" port-forward service/routingmanager-service --pod-running-timeout=1s 57858:80 --namespace dev
2021-01-12T17:27:53.3456244Z | Library | TRACE | Using kubectl found at: 'c:\users\USERNAME\appdata\local\microsoft\visualstudio\16.0_9179140b\extensions\fx4nyvrt.emg\ServiceHub\kubectl\win\kubectl.exe'
2021-01-12T17:27:53.3551755Z | Library | TRACE | Invoked long running kubectl PortForward command: '--kubeconfig="C:\Users\USERNAME\AppData\Local\Temp\tmpAC6E.tmp" port-forward service/routingmanager-service --pod-running-timeout=1s 57858:80 --namespace dev'
2021-01-12T17:27:54.9104615Z | Library | TRACE | Port forward to routing manager output : 'Forwarding from 127.0.0.1:57858 -> 80'
2021-01-12T17:27:54.9109003Z | Library | TRACE | Port forward to routing manager output : 'Forwarding from [::1]:57858 -> 80'
2021-01-12T17:27:54.9122560Z | Library | TRACE | Port forward to routing manager output : 'Handling connection for 57858'
2021-01-12T17:27:55.3529831Z | Library | WARNG | Error during port forward to routing manager : E0112 09:27:55.351661   20928 portforward.go:400] an error occurred forwarding 57858 -> 80: error forwarding port 80 to pod 98b052fe032005ee38f4e6afde8eca3d2ac419f26fc78d528f6cef72a60c0c19, uid : exit status 1: 2021/01/12 17:27:55 socat[8778] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused
2021-01-12T17:27:55.3537720Z | Library | WARNG | Routing manager's GetStatus threw HttpRequestException.
2021-01-12T17:27:55.3599740Z | Library | TRACE | Dependency: Kubernetes <json>{"target":"PortForward","success":true,"duration":null,"properties":{"error":"E0112 09:27:55.351661   20928 portforward.go:400] an error occurred forwarding 57858 -> 80: error forwarding port 80 to pod 98b052fe032005ee38f4e6afde8eca3d2ac419f26fc78d528f6cef72a60c0c19, uid : exit status 1: 2021/01/12 17:27:55 socat[8778] E connect(5, AF=2 127.0.0.1:80, 16): Connection refused","exitCode":"0"}}</json>
2021-01-12T17:27:55.3642929Z | Library | WARNG | Logging handled exception: System.Net.Http.HttpRequestException: {"StackTrace":"   at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)\r\n   at System.Net.Http.HttpConnectionPool.SendWithNtConnectionAuthAsync(HttpConnection connection, HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)\r\n   at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)\r\n   at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)\r\n   at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)\r\n   at Microsoft.DevSpaces.Library.ManagementClients.RoutingManagementClient.<>c__DisplayClass12_2.<<GetStatusAsync>b__5>d.MoveNext()","Message":"An error occurred while sending the request.","Data":{},"InnerException":{"ClassName":"System.IO.IOException","Message":"The response ended prematurely.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"   at System.Net.Http.HttpConnection.FillAsync()\r\n   at System.Net.Http.HttpConnection.ReadNextResponseHeaderLineAsync(Boolean foldedHeadersAllowed)\r\n   at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146232800,"Source":"System.Net.Http","WatsonBuckets":null},"HelpLink":null,"Source":"System.Net.Http","HResult":-2146232800}
2021-01-12T17:27:56.3772311Z | Library | WARNG | Error when trying to get status from routing manager
2021-01-12T17:27:56.3817065Z | Library | WARNG | Logging handled exception: System.OperationCanceledException: {"ClassName":"System.OperationCanceledException","Message":"The operation was canceled.","Data":null,"InnerException":null,"HelpURL":null,"StackTraceString":"   at System.Threading.CancellationToken.ThrowOperationCanceledException()\r\n   at Microsoft.DevSpaces.Common.Utilities.WebUtilities.RetryUntilTimeWithWaitAsync(Func`2 action, TimeSpan maxWaitTime, TimeSpan waitInterval, CancellationToken cancellationToken)\r\n   at Microsoft.DevSpaces.Library.ManagementClients.RoutingManagementClient.<>c__DisplayClass12_0.<<GetStatusAsync>b__2>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at Microsoft.DevSpaces.Library.ManagementClients.RoutingManagementClient.<>c__DisplayClass12_3.<<GetStatusAsync>b__7>d.MoveNext()","RemoteStackTraceString":null,"RemoteStackIndex":0,"ExceptionMethod":null,"HResult":-2146233029,"Source":"System.Private.CoreLib","WatsonBuckets":null}
2021-01-12T17:27:58.4716907Z | Library | TRACE | Dependency: Kubernetes <json>{"target":"GetFirstNamespacedPodWithLabelAsync","success":true,"duration":null,"properties":{}}</json>
2021-01-12T17:27:58.4826844Z | Library | TRACE | Invoking kubectl PortForward command: --kubeconfig="C:\Users\USERNAME\AppData\Local\Temp\tmpC084.tmp" port-forward service/routingmanager-service --pod-running-timeout=1s 57864:80 --namespace dev
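For reference, the port-forward that the library issues in the logs above can be reproduced by hand to check whether the routing manager is actually reachable on port 80; a minimal sketch, assuming the same 'dev' namespace and an arbitrary free local port:

    # Forward a local port to the routing manager service, as the library does
    kubectl port-forward service/routingmanager-service 57900:80 --namespace dev
    # In a second shell, probe the forwarded port; a refused connection matches the socat errors above
    curl -v http://127.0.0.1:57900/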

@pragyamehta
Contributor

@curtstate Yes, I can confirm this is the same issue. We are rolling out a fix for this today. Please try again this afternoon/evening. Thanks!

@curtstate

@pragyamehta Thanks for the update, I'll look forward to the fix. I spent a couple of hours trying to figure out what was wrong.

If you don't mind sharing, what was the root cause?

@pragyamehta
Contributor

@curtstate As you can see from the logs, we port forward to the routing manager to find out whether it was able to set up isolation properly for your Bridge to Kubernetes session. For each B2K session, we spin up a new routing manager pod and kill the previously running one (if any). My hunch is that port forwarding to the service is unable to pick the right (running) pod over the terminating one. We are releasing a fix that determines the correct pod to connect to and establishes the port forward connection to that pod. We have noticed that port forwarding to the specific pod does not result in the above errors. Thanks!
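For anyone who wants to check this hypothesis on their own cluster before the fix ships, a rough sketch (the 'dev' namespace and pod naming are taken from the logs in this thread, not from the product's source):

    # Compare the endpoints behind the service with the pods that actually exist;
    # a terminating routing manager pod still being targeted would match the hunch above
    kubectl get endpoints routingmanager-service -n dev -o wide
    kubectl get pods -n dev | grep routingmanager-deployment
    # Port-forwarding to the specific running pod, rather than the service, sidesteps the ambiguity
    kubectl port-forward pod/<running-routingmanager-pod> 57900:80 -n dev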

@ciacco85

@pragyamehta thanks for the explanation! I was interested, too.
Would you mind posting an update here when you roll out the new release? We're really looking forward to it!
Thanks

@pragyamehta
Contributor

@curtstate
Also, apologies for the confusion - I just noticed that you are a Visual Studio user. We are releasing the fix for this problem on VS Code today. Our fix for VS will be released sometime before end of this month. We are working hard to get the VS release out as soon as possible since we know this issue is blocking customers.
Can you let us know if this issue reproduces consistently for you?

@ciacco85 Sure, I can update this thread once we release in VSCode today.

@curtstate

@pragyamehta

I really appreciate the details; they help me understand what is going wrong and whether it's a problem on our side. Our infrastructure teams often change things, so it's not always clear where to go to try to fix an issue.

This was blocking me in Visual Studio often, though not every single time, until yesterday.

I like to understand the under-the-hood details so I can follow what's happening at a technical level and know where to look when things break.

@ciacco85

@curtstate

> Also, apologies for the confusion - I just noticed that you are a Visual Studio user. We are releasing the fix for this problem on VS Code today. Our fix for VS will be released sometime before end of this month. We are working hard to get the VS release out as soon as possible since we know this issue is blocking customers.
>
> Can you let us know if this issue reproduces consistently for you?
>
> @ciacco85 Sure, I can update this thread once we release in VSCode today.

I've switched to VS Code since the adoption of B2K; I was using VS with Dev Spaces.
So, waiting for the update 😉

@pragyamehta
Contributor

@curtstate @ciacco85 We have released this on VS Code; please try it and let me know if you face any issues. I will keep you updated about releasing this fix on VS.

@filariow
Author

filariow commented Jan 12, 2021

It seems to be working now! Thank you very much, guys!
If it's OK with you, we'll test this more thoroughly tomorrow and close the issue if no errors appear.

@curtstate

@pragyamehta
I know you seem to have already answered this, but just to be clear: will a separate fix be released for Visual Studio?

Will the update come through the Visual Studio extension for Bridge to Kubernetes?

@pragyamehta
Contributor

@curtstate Yes, we will be releasing a new version of the VS extension. I will keep this thread updated once I have more info on the ETA.

@yorkie-m

@pragyamehta Could you please confirm the version number of the VS Code update? It's not showing up as an update the way I would expect, but I can see a version v1.0.20210112. Installing that specific version does not solve our problem, and the error 'Failed to get routing manager deployment status' is still reported.

Thanks.

[screenshot: installed Bridge to Kubernetes extension version in VS Code]

@pragyamehta
Contributor

pragyamehta commented Jan 13, 2021

@yorkie-m Yes, the new version is v1.0.20210112. This should have auto-upgraded for you; we are looking into fixing that part.
Could you share the latest library log files from %temp%\Bridge to Kubernetes or $TMPDIR/Bridge to Kubernetes? We would also like to take a look at the routing manager logs from the routing manager pod. It would be great if you could send these to bridgetokubernetes@microsoft.com with a link to this issue so that we can take a look.
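If it helps while the auto-upgrade problem is investigated, the installed extension version can also be checked and refreshed from the VS Code CLI; a quick sketch (the extension ID is the one shown in the environment details above):

    # Show the installed Bridge to Kubernetes (mindaro) extension and its version
    code --list-extensions --show-versions | grep mindaro
    # Force a reinstall to pull the latest published version from the Marketplace
    code --install-extension mindaro.mindaro --force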

@yorkie-m

@pragyamehta I spotted another version of the VS Code extension appearing a short while ago - v1.0.120210113.

I've updated to that version and can now connect successfully to our cluster. I'm just getting the other developers on our team to verify that we can consistently connect, but this appears to have fixed the issue. Many thanks for all your help with this. Looking forward to the Visual Studio fix being released.

@yorkie-m

@pragyamehta The next developer to try this is still seeing the routing manager deployment status error. I've mailed the requested log files to you and to bridgetokubernetes@microsoft.com. Thanks

@danreydel

Hi,

I'm having a similar issue, but I'm using Visual Studio 2019 together with the Bridge to Kubernetes extension v2.0.20201216.1.
Could this be related to the issue you already solved for VS Code?
Visual Studio spends some time saying "connecting to service" and afterwards an error message appears: "Failed to get routing manager deployment status".
I'm using an AKS cluster with AGIC enabled.

Thanks in advance,

@curtstate

curtstate commented Jan 13, 2021

@danreydel I asked this in a previous comment, as I was experiencing the same thing; the fix for Visual Studio will come in a separate update to the extension later this month.

@curtstate

@pragyamehta Any word on when the fix will make it to Visual Studio?

@pragyamehta
Contributor

@curtstate We will be releasing this sometime this week.
@danegsta would be able to provide more details.

@ciacco85

Will the new version for VS be able to run multiple microservices together according to the solution configuration?

@daniv-msft
Collaborator

@ciacco85 Unfortunately, solutions won't be supported in the next Visual Studio release. Supporting solutions implies making deeper changes to how we store our Bridge configuration, and we don't have that yet.
However, we know it's something our VS users expect, so it's definitely high in our backlog.

@danegsta

@curtstate @danreydel an update to the Visual Studio Bridge to Kubernetes extension was just pushed to the Visual Studio marketplace with the latest fixes for routing isolation connection issues.

@HariAddepalli

@danegsta Thank you. I'm able to debug in VS now.

@curtstate

@danegsta, @pragyamehta Thanks, my co-workers and I are now able to debug in VS again.

The latest fix is looking good so far.

@amsoedal closed this as completed Feb 2, 2021