
[Azure] WinRM timeout with Windows 2016-Datacenter Marketplace Image #8658

Closed
Dilergore opened this issue Jan 28, 2020 · 117 comments
Labels
bug remote-plugin/azure stage/waiting-on-upstream This issue is waiting on an upstream change

Comments

@Dilergore

Dilergore commented Jan 28, 2020

Please refer to the end of this thread to see other users reporting that this is not working.
MicrosoftDocs/azure-docs#31188

Issue:

Started: December 2019.
Packer cannot connect over WinRM to machines provisioned from the Windows Server 2016 (2016-Datacenter) Marketplace image in Azure.

Further details:

Increasing the WinRM timeout does not help. The last working image appears to be version "14393.3326.1911120150" (released 12 Nov); it stopped working with "14393.3384.1912042333" (released 10 Dec).

This issue is only impacting 2016-Datacenter. 2019 is working properly.

To get image details for a region:

az vm image list --location northeurope --offer WindowsServer --publisher MicrosoftWindowsServer --sku 2016-Datacenter --all

URL to the Last Working Image:

https://support.microsoft.com/en-us/help/4525236/windows-10-update-kb4525236

URL to the Image where something went wrong:

https://support.microsoft.com/en-us/help/4530689/windows-10-update-kb4530689

Notes:

This currently applies to North Europe. I had no time to investigate other regions, but I believe the same images are distributed to every region.

I am opening a Microsoft case and planning to update the thread with the progress.

@Dilergore
Author

Interesting. It was definitely not working for quite some time, but now I cannot reproduce the issue anymore. Even with the latest image, and with the images released between November and today, it works properly.

I will reopen in case I start to see this issue again.

@AliAllomani

AliAllomani commented Jan 28, 2020

I can still reproduce the issue.

Used image:

   "image_publisher": "MicrosoftWindowsServer",
    "image_offer": "WindowsServer",
    "image_sku": "2016-Datacenter"

From initial troubleshooting it looks to me like a certificate issue. Running winrm quickconfig on the machine while Packer reports azure-arm: Waiting for WinRM to become available... results in:

WinRM service is already running on this machine.
WSManFault
    Message
        ProviderFault
            WSManFault
                Message = Cannot create a WinRM listener on HTTPS because this machine does not have an appropriate certificate. To be used for SSL, a certificate must have a CN matching the hostname, be appropriate for Server Authentication, and not be expired, revoked, or self-signed. 

Error number:  -2144108267 0x80338115
Cannot create a WinRM listener on HTTPS because this machine does not have an appropriate certificate. To be used for SSL, a certificate must have a CN matching the hostname, be appropriate for Server Authentication, and not be expired, revoked, or self-signed. 

And when trying to connect using openssl to retrieve the certificate, I'm getting errno=54:

openssl s_client -connect 13.95.122.54:5986 -showcerts
CONNECTED(00000003)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 307 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : 0000
    Session-ID:
    Session-ID-ctx:
    Master-Key:
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1580229460
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---

Re-generating the self-signed certificate and reconfiguring WinRM causes Packer to respond to the connection immediately:

# Create a new self-signed certificate whose CN matches the local computer name
$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME"
# Replace the existing listener with an HTTPS listener bound to the new certificate
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force
# Restart the WinRM service to pick up the new listener
Stop-Service winrm
Start-Service winrm

and from openssl -showcerts I'm getting a correct answer:

 openssl s_client -connect 13.95.122.54:5986 -showcerts
CONNECTED(00000003)
depth=0 CN = pkrvm39jkvjspuk
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = pkrvm39jkvjspuk
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/CN=pkrvm39jkvjspuk
   i:/CN=pkrvm39jkvjspuk
-----BEGIN CERTIFICATE-----
MIIDKjCCAhKgAwIBAgIQbI6Ll/YdLKZFm3XIDuCVEzANBgkqhkiG9w0BAQsFADAa
MRgwFgYDVQQDDA9wa3J2bTM5amt2anNwdWswHhcNMjAwMTI4MTYzNDI4WhcNMjEw
MTI4MTY1NDI4WjAaMRgwFgYDVQQDDA9wa3J2bTM5amt2anNwdWswggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDTaBPCr8ImXt+wyDEcNVK3lW5HOme7X8h0
gl+ZTAmwhlzyZwWI1S5fW0Gfc+VQtwmscZs7in1/Rg0EBnhCHKiXYdJdWgiNQjp8
hxNHQlPzFMxBNHJCncs3cUjl8TBvWFVof+mNmv20IcoDfhkBXo8PBMC1M08krfGd
KXxvJ/Km3dfGvY3HKyMAdwJK/r4rENnTMIr5KgOv2cL4usTNS0o4nQSDVbL8rXdN
0Pfwui0ItGiZ7auul/tioQAmKpcxle7y16b/XnX1olQp59T7WklKcfS4Rt+XloAM
dyam22dhXaPQ9/03MBEqguO/SXDV2m+7RFLPRzHDPWwrQjE6eClDAgMBAAGjbDBq
MA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwEw
GgYDVR0RBBMwEYIPcGtydm0zOWprdmpzcHVrMB0GA1UdDgQWBBQYK0o8mxc3uUyn
9WAvpOzINrvkyzANBgkqhkiG9w0BAQsFAAOCAQEALIRGvoQONxX0RzdyOEX15dJm
tMChjVgU9y176UK03NcuNqfQqJXhnibZQO/+ApXT4C1YKUzZcmqkJpPkt2ufYmC1
sFLp3tGZ35zfjtU8Mm6xEHdQv4LGQzpCycVqlvFGrdWCMCB4EWZb0z7oqp+nsz2P
14HFaiPsHnfpJEMUF+jrMQkGb9bzMHTT4Y0q5TStVdc9q1cu3pWLnzJ6gaBlz0Iz
DG03HtTmwppmDLSE1RZYJBQ6UsgD/L/jbR2c08ko4t1uSMwRcANv5sGZ6TukyK95
JVnYbFrZWzcqWfE1uynTEdeb+l/aospY9g/Fjt4WKI0U0xnGuczsbx1KoO0ELg==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=pkrvm39jkvjspuk
issuer=/CN=pkrvm39jkvjspuk
---
No client certificate CA names sent
Peer signing digest: SHA256
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 1298 bytes and written 433 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 5E200000884A7231C92707E15CD2222B4BE94DD50A3B61E7B8763B3BC0A2F615
    Session-ID-ctx:
    Master-Key: 6CF4DA86AEBEB597F72DB9DC9E8C8B59D8B240C7FE6F8491B14314E86529A338F07E1B2C5BEB300C48DE4D490978D5D5
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1580229891
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)
---

I see that Packer uses the Azure osProfile.windowsConfiguration.winRM value in the template to configure WinRM on the VM.

So I would assume that either there is an issue with the certificate Packer creates before uploading it to the Azure key vault, or there is an issue on the Azure side that prevents the VM from configuring WinRM correctly from the template values; this needs more troubleshooting.

 "osProfile": {
                    "computerName": "[parameters('virtualMachines_pkrvm2nb5asnu2s_name')]",
                    "adminUsername": "packer",
                    "windowsConfiguration": {
                        "provisionVMAgent": true,
                        "enableAutomaticUpdates": true,
                        "winRM": {
                            "listeners": [
                                {
                                    "protocol": "https",
                                    "certificateUrl": "https://pkrkv2nb5asnu2s.vault.azure.net/secrets/packerKeyVaultSecret/05113faa18ee40a2b5465910b2f3dda1"
                                }
                            ]
                        }
                    },
                    "secrets": [
                        {
                            "sourceVault": {
                                "id": "[parameters('vaults_pkrkv2nb5asnu2s_externalid')]"
                            },
                            "vaultCertificates": [
                                {
                                    "certificateUrl": "https://pkrkv2nb5asnu2s.vault.azure.net/secrets/packerKeyVaultSecret/05113faa18ee40a2b5465910b2f3dda1",
                                    "certificateStore": "My"
                                }
                            ]
                        }
                    ]
                },

@Dilergore Dilergore reopened this Jan 28, 2020
@Dilergore
Author

Dilergore commented Jan 28, 2020

@AliAllomani Okay... Which region are you deploying to? A few weeks ago I thought this was an image-related issue; I had no time to investigate further. Today I tried an older image and it started to work, so I opened this issue, but then I tried the latest as well and it also worked. I don't know what is going on.

Can you try with older versions as well? Also in WestUS2? Let's try to rule these out...

Reopened this for now, but you are on your own, because it is now working for me....

@AliAllomani

@Dilergore I'm deploying to EU West. I also faced the timeout issue with the latest Windows 2019-Datacenter image, but I'm not sure if it's the same issue; I will do more tests on my side with different images.

@Dilergore
Author

@AliAllomani It was not happening for me with 2019. It usually takes some time to configure WinRM by default. Using a bigger machine, an SSD for the OS disk, and increasing the timeout usually works around this problem.

My setup is:
Timeout: 20 min
Premium SSD for OS Disk
D4s_v3

In my experience, even with this setup it sometimes takes longer than 5-6 minutes to configure WinRM and connect.

@AliAllomani

AliAllomani commented Jan 29, 2020

@Dilergore it seems intermittent,

The common things I found out:

  • for 2016 it usually takes up to 10 min before the VM is initially configured and PowerShell commands can run (even locally); for 2019 it is sometimes available immediately

  • in all cases the certificate common name is not correct: Packer always creates the common name in the format {hostname}.cloudapp.net, whereas for EU West, for example, the FQDN should be {hostname}.westeurope.cloudapp.azure.com. But this is not an issue as we define "winrm_insecure": true

host := fmt.Sprintf("%s.cloudapp.net", c.tmpComputeName)

  • when it does not respond within 10 min, it is always a certificate issue: Encountered an internal error in the SSL library.
    This appears whenever you try to connect to the instance:
> Test-WSMan -ComputerName 52.142.198.26 -UseSSL
Test-WSMan : <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="12175"
Machine="Bastion-UAT.wdprocessing.pvt"><f:Message>The server certificate on the destination computer
(52.142.198.26:5986) has the following errors:
Encountered an internal error in the SSL library.   </f:Message></f:WSManFault>
At line:1 char:1
+ Test-WSMan -ComputerName 52.142.198.26 -UseSSL
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (52.142.198.26:String) [Test-WSMan], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand
  • removing the WinRM listener and re-creating it manually fixes the issue (Packer responds immediately, and Test-WSMan gives the correct answer); there is no need to re-generate the certificate or use a different one. Test results below.

  • it seems to happen more often with the latest image than with "14393.3326.1911120150"

  • in the Windows event log I can see the event below when the issue occurs:

A fatal error occurred while creating a TLS client credential. The internal error state is 10013.

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> 
- <System> 
<Provider Name="Schannel" Guid="{1F678132-5938-4686-9FDC-C8FF68F15C85}" /> 
<EventID>36871</EventID> 
<Version>0</Version> 
<Level>2</Level> 
<Task>0</Task> 
<Opcode>0</Opcode> 
<Keywords>0x8000000000000000</Keywords> 
<TimeCreated SystemTime="2020-01-29T12:25:18.377000300Z" /> 
<EventRecordID>767</EventRecordID> 
<Correlation ActivityID="{80B997BA-F1CA-0000-01F5-7D5E9AD6D501}" /> 
<Execution ProcessID="632" ThreadID="2352" /> 
<Channel>System</Channel> 
<Computer>pkrvmudjx20x9lp</Computer> 
<Security UserID="S-1-5-18" /> 
</System> 
- <EventData> 
<Data Name="Type">client</Data> 
<Data Name="ErrorState">10013</Data> 
</EventData> 
</Event>

Occurrence tests done so far (all in EU West):

Standard_F8s_v2 -  SSD - win2019 - image version : latest

15:50:16  ==> azure-arm: Getting the VM's IP address ...
15:50:16  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-ibnzhmks0m'
15:50:16  ==> azure-arm:  -> PublicIPAddressName : 'pkripibnzhmks0m'
15:50:16  ==> azure-arm:  -> NicName             : 'pkrniibnzhmks0m'
15:50:16  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
15:50:16  ==> azure-arm:  -> IP Address          : '40.68.191.187'
15:50:16  ==> azure-arm: Waiting for WinRM to become available...
15:50:16  ==> azure-arm: #< CLIXML
15:50:16      azure-arm: WinRM connected.

=======

Standard_F8s_v2 -  SSD - win2016 - image version : 14393.3326.1911120150
14:11:19  ==> azure-arm: Getting the VM's IP address ...
14:11:19  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-zhyjvoeajl'
14:11:19  ==> azure-arm:  -> PublicIPAddressName : 'pkripzhyjvoeajl'
14:11:19  ==> azure-arm:  -> NicName             : 'pkrnizhyjvoeajl'
14:11:19  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
14:11:19  ==> azure-arm:  -> IP Address          : '52.174.178.101'
14:11:19  ==> azure-arm: Waiting for WinRM to become available...
14:20:40  ==> azure-arm: #< CLIXML
14:20:40      azure-arm: WinRM connected.

================
Standard_B2ms - HDD - win2016 - image version : latest
12:13:08  ==> azure-arm: Getting the VM's IP address ...
12:13:08  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-wt2ndevwlv'
12:13:08  ==> azure-arm:  -> PublicIPAddressName : 'pkripwt2ndevwlv'
12:13:08  ==> azure-arm:  -> NicName             : 'pkrniwt2ndevwlv'
12:13:08  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
12:13:08  ==> azure-arm:  -> IP Address          : '52.148.254.62'
12:13:08  ==> azure-arm: Waiting for WinRM to become available...
12:43:00  ==> azure-arm: Timeout waiting for WinRM.
12:43:00  ==> azure-arm: 
12:43:00  ==> azure-arm: Cleanup requested, deleting resource group ...
12:49:52  ==> azure-arm: Resource group has been deleted.
12:49:52  Build 'azure-arm' errored: Timeout waiting for WinRM.
==============
Standard_D8s_v3 - HDD - win2016 - image version : latest

20:57:27  ==> azure-arm: Waiting for WinRM to become available...
21:06:19  ==> azure-arm: #< CLIXML
21:06:19      azure-arm: WinRM connected.
21:06:19  ==> azure-arm: <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
21:06:19  ==> azure-arm: Connected to WinRM!
21:06:19  ==> azure-arm: Provisioning with Powershell...
===========
Standard_D8s_v3 - SSD - win2016 - image version : latest

21:17:12  ==> azure-arm: Getting the VM's IP address ...
21:17:12  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-vi2l6na2zy'
21:17:12  ==> azure-arm:  -> PublicIPAddressName : 'pkripvi2l6na2zy'
21:17:12  ==> azure-arm:  -> NicName             : 'pkrnivi2l6na2zy'
21:17:12  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
21:17:12  ==> azure-arm:  -> IP Address          : '168.63.109.42'
21:17:12  ==> azure-arm: Waiting for WinRM to become available...
21:47:20  ==> azure-arm: Timeout waiting for WinRM.
21:47:20  ==> azure-arm: 
21:47:20  ==> azure-arm: Cleanup requested, deleting resource group ...
==============================
Standard_D8s_v3 - SSD - win2016 - image version : latest

11:51:06  ==> azure-arm: Getting the VM's IP address ...
11:51:06  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-ksei5ia6c6'
11:51:06  ==> azure-arm:  -> PublicIPAddressName : 'pkripksei5ia6c6'
11:51:06  ==> azure-arm:  -> NicName             : 'pkrniksei5ia6c6'
11:51:06  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
11:51:06  ==> azure-arm:  -> IP Address          : '13.95.64.201'
11:51:06  ==> azure-arm: Waiting for WinRM to become available...
11:59:58      azure-arm: WinRM connected.
11:59:58  ==> azure-arm: #< CLIXML
==============================
Standard_D8s_v3 - SSD - win2016 - image version : 14393.3326.1911120150

21:56:07  ==> azure-arm: Getting the VM's IP address ...
21:56:07  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-6bz6fqr3js'
21:56:07  ==> azure-arm:  -> PublicIPAddressName : 'pkrip6bz6fqr3js'
21:56:07  ==> azure-arm:  -> NicName             : 'pkrni6bz6fqr3js'
21:56:07  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
21:56:07  ==> azure-arm:  -> IP Address          : '104.46.40.255'
21:56:07  ==> azure-arm: Waiting for WinRM to become available...
22:03:43  ==> azure-arm: #< CLIXML
22:03:43      azure-arm: WinRM connected.
22:03:43  ==> azure-arm: <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
22:03:43  ==> azure-arm: Connected to WinRM!
22:03:43  ==> azure-arm: Provisioning with Powershell...

=========

Standard_F8s_v2 -  HDD - win2019 - image version : latest

16:19:50  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-wwwgtctyip'
16:19:50  ==> azure-arm:  -> PublicIPAddressName : 'pkripwwwgtctyip'
16:19:50  ==> azure-arm:  -> NicName             : 'pkrniwwwgtctyip'
16:19:50  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
16:19:50  ==> azure-arm:  -> IP Address          : '52.157.111.197'
16:19:50  ==> azure-arm: Waiting for WinRM to become available...
16:19:56  ==> azure-arm: #< CLIXML
16:19:56      azure-arm: WinRM connected.

========

Standard_B4ms - HDD - win2019 - image version : latest

16:03:00  ==> azure-arm:  -> DeploymentName    : 'pkrdp3ko5xlkk4n'
16:05:07  ==> azure-arm: Getting the VM's IP address ...
16:05:07  ==> azure-arm:  -> ResourceGroupName   : 'packer-Resource-Group-3ko5xlkk4n'
16:05:07  ==> azure-arm:  -> PublicIPAddressName : 'pkrip3ko5xlkk4n'
16:05:07  ==> azure-arm:  -> NicName             : 'pkrni3ko5xlkk4n'
16:05:07  ==> azure-arm:  -> Network Connection  : 'PublicEndpointInPrivateNetwork'
16:05:07  ==> azure-arm:  -> IP Address          : '52.166.196.146'
16:05:07  ==> azure-arm: Waiting for WinRM to become available...
16:34:59  ==> azure-arm: Timeout waiting for WinRM.
16:34:59  ==> azure-arm: 

Replacing-the-listener test:

Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS C:\Users\packer> Test-WSMan -ComputerName localhost -UseSSL
Test-WSMan : <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="12175"
Machine="pkrvmwawvo84vka"><f:Message>The server certificate on the destination computer (localhost:5986) has the
following errors:
Encountered an internal error in the SSL library.   </f:Message></f:WSManFault>
At line:1 char:1
+ Test-WSMan -ComputerName localhost -UseSSL
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (localhost:String) [Test-WSMan], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand

PS C:\Users\packer> Get-ChildItem -path cert:\LocalMachine\My


   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\My

Thumbprint                                Subject
----------                                -------
8DDC5709AB990B6AC7F8D8CF1B97FC5FA136B9C0  CN=pkrvmwawvo84vka.cloudapp.net


PS C:\Users\packer> Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
PS C:\Users\packer> Test-WSMan -ComputerName localhost -UseSSL
Test-WSMan : <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="2150858770"
Machine="pkrvmwawvo84vka"><f:Message>The client cannot connect to the destination specified in the request. Verify
that the service on the destination is running and is accepting requests. Consult the logs and documentation for the
WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service,
run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".
</f:Message></f:WSManFault>
At line:1 char:1
+ Test-WSMan -ComputerName localhost -UseSSL
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (localhost:String) [Test-WSMan], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand


PS C:\Users\packer> New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint 8DDC5709AB990B6AC7F8D8CF1B97FC5FA136B9C0 -Force


   WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Listener

Type            Keys                                Name
----            ----                                ----
Container       {Transport=HTTPS, Address=*}        Listener_1305953032


PS C:\Users\packer> Test-WSMan -ComputerName localhost -UseSSL
Test-WSMan : <f:WSManFault xmlns:f="http://schemas.microsoft.com/wbem/wsman/1/wsmanfault" Code="12175"
Machine="pkrvmwawvo84vka"><f:Message>The server certificate on the destination computer (localhost:5986) has the
following errors:
The SSL certificate is signed by an unknown certificate authority.
The SSL certificate contains a common name (CN) that does not match the hostname.     </f:Message></f:WSManFault>
At line:1 char:1
+ Test-WSMan -ComputerName localhost -UseSSL
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (localhost:String) [Test-WSMan], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand

PS C:\Users\packer>

@adamrushuk

Just wanted to add that I also get intermittent WinRM timeouts using both 2012-R2-Datacenter and 2016-Datacenter in UK South. It seems worse on the 2012-R2-Datacenter builds.

I was using the smalldisk image variants, but changed to the standard ones with more disk space available, following previous advice.

I've also increased the WinRM timeout to 1 hour, and increased VM size to Standard_D4s_v3, to no avail.

@BruceShipman

I've been having the same issues in US West 2 for the last couple of days: 2019-Datacenter builds are fine, but 2016-Datacenter and 2012-R2-Datacenter ones intermittently fail to connect via WinRM, with 2012-R2 being the most problematic. Builds are done using the smalldisk image, initially with a D2sV3 vm_size and a 20-minute winrm_timeout. Increasing the VM size or timeout doesn't show any perceptible improvement.

@Dilergore
Author

I can fast-track this with Microsoft, but without the root cause... Also, it seems to be working for me (for now), so I cannot even continue testing on my own. If you can find out what the issue is, I am happy to engage support.

@ghost

ghost commented Jan 29, 2020

I just started running into this problem today. For the last two weeks I've been building images to test out an automated process using Packer and did not have any issues with WinRM. I'm running Packer on the Azure DevOps hosted agent windows-2019, targeting resource groups in the South Central US region using the 2016-Datacenter image. I ran three builds today without issue, and at 2 pm EST the builds started to fail with WinRM timeouts. I'm using a Standard_DS4_v2 size VM, so it is highly unlikely to be a resource constraint issue. The way it is behaving, I'm leaning towards a networking-related issue in the Azure data center. I'm running a few tests now to try to provide some more useful details.

@AliAllomani

From my test findings I'd assume that something is going wrong within the OS during the automatic WinRM SSL configuration driven by the Azure VM template.

@Dilergore I think there is currently no way in Packer to configure the builder VM to use non-SSL WinRM?

@Dilergore
Author

Dilergore commented Jan 30, 2020

From my test findings I'd assume that something is going wrong within the OS during the automatic WinRM SSL configuration driven by the Azure VM template.

@Dilergore I think there is currently no way in Packer to configure the builder VM to use non-SSL WinRM?

https://www.packer.io/docs/communicators/winrm.html#winrm-communicator-options

Never tried it tho...

@AliAllomani

@Dilergore The available parameters define the method the communicator uses; however, on the builder side I see it's hardcoded:

profile.WindowsConfiguration = &compute.WindowsConfiguration{
    ProvisionVMAgent: to.BoolPtr(true),
    WinRM: &compute.WinRMConfiguration{
        Listeners: &[]compute.WinRMListener{
            {
                Protocol:       "https",
                CertificateURL: to.StringPtr(winRMCertificateUrl),
            },
        },
    },
}

@nywilken nywilken self-assigned this Jan 30, 2020
@BruceShipman

And today, just to muddy the water a bit...

Yesterday evening's (1800 GMT-8) pipeline failed due to WinRM timeout on all three builds - 2012 R2, 2016, and 2019. This morning's run (0400) ran correctly. This is the first WinRM timeout I've seen using the 2019-Datacenter source. All three builds use smalldisk, DS3v2, 60m WinRM timeout.

In addition, afternoon/evening builds have a much higher incidence of failure than early morning ones.

@tantra35

tantra35 commented Jan 31, 2020

We have a similar issue, but IMHO it doesn't depend on a particular Windows image; we think it is an issue with the Azure platform itself. For us, a small workaround is to change the instance type from Standard_DS2_v2 to Standard_B2ms, or vice versa.

@nywilken
Member

Hi folks, thanks for keeping this thread up to date with your latest findings. I am looking into this issue on my end to see if there is any information that can help isolate what might be happening here. I too have observed that connecting via WinRM times out when using certain images; changing my image to 2012-R2-Datacenter seems to work every time in the westus region.

We have a similar issue, but IMHO it doesn't depend on a particular Windows image; we think it is an issue with the Azure platform itself. For us, a small workaround is to change the instance type from Standard_DS2_v2 to Standard_B2ms, or vice versa.

This is possible, but hard to tell with the information in the logs.

@Dilergore have you, or anyone on the thread, opened a support ticket with Azure around this particular issue?

@Dilergore
Author

@nywilken I will open it during the weekend and will involve some people who can help us route the ticket inside Microsoft. If you want to contribute, please send me your email address privately.

Thanks!

@chapter9

chapter9 commented Feb 2, 2020

As noted in the Packer Documentation - Getting started/Build an image

A quick aside/warning:
Windows administrators in the know might be wondering why we haven't simply used a winrm quickconfig -q command in the script above, as this would automatically set up all of the required elements necessary for connecting over WinRM. Why all the extra effort to configure things manually?
Well, long and short, use of the winrm quickconfig -q command can sometimes cause the Packer build to fail shortly after the WinRM connection is established. How?

  1. Among other things, as well as setting up the listener for WinRM, the quickconfig command also configures the firewall to allow management messages to be sent over HTTP.
  2. This undoes the previous command in the script that configured the firewall to prevent this access.
  3. The upshot is that the system is configured and ready to accept WinRM connections earlier than intended.
  4. If Packer establishes its WinRM connection immediately after execution of the 'winrm quickconfig -q' command, the later commands within the script that restart the WinRM service will unceremoniously pull the rug out from under the connection.
  5. While Packer does a lot to ensure the stability of its connection in to your instance, this sort of abuse can prove to be too much and may cause your Packer build to stall irrecoverably or fail!

Unfortunately, while this is true on AWS using the userdata script, I'm not sure how the Azure builder configures WinRM and whether it runs winrm quickconfig -q while Packer attempts to connect on Azure. If it does, that might be the cause of this.

Also, note that the Packer documentation - Communicator/WinRM still refers to winrm quickconfig -q, and many other repo files also mention winrm quickconfig -q, which could affect other builders and direct the community into this issue.

A successful workaround that I am using on AWS is the SSH communicator on Windows 2016/2019, installing SSH via userdata following the installation instructions in the Microsoft documentation or using Microsoft OpenSSH portable. Not sure how this would translate to Azure.

@AlexeyKarpushin

Hi All,

Are there any updates on this? I haven't been able to repro the issue for a couple of days; perhaps a fix was rolled out?

@BruceShipman

@AlexeyKarpushin - I definitely still have the issue.

It seems to have an Azure load component. I still get WinRM timeouts on all three platforms (2012R2, 2016, 2019). Failure rates on the three builds are 60% or more during weekday business hours, about 10-20% on weeknights and weekend days, and very rare on weekend overnights. In addition, the 2019 builds have a much lower failure rate than 2016 and 2012R2.

@akingscote

akingscote commented Feb 3, 2020

Getting this issue intermittently on Windows 2019 now as well. I suspect it may be, as others have said, something to do with the time of day; it seems to work in mornings/evenings but not in core hours.

"winrm_use_ssl": true,
"winrm_insecure": true,
"winrm_timeout": "10m",
"winrm_username": "packer",

"location": "uksouth",
"vm_size": "Standard_DS2_v2"

@nywilken
Member

nywilken commented Feb 4, 2020

Hi folks, sorry for the slow response here. I have not been able to reproduce this issue since Friday, although I do notice that 2016-Datacenter builds take longer than other OS versions to connect via WinRM. But I don't know why that is.

@nywilken I will open it during the weekend and will involve some people who can help us route the ticket inside Microsoft. If you want to contribute, please send me your email address privately.

@Dilergore I don't have any new information to contribute so I'll refrain from reaching out privately. But thanks for offering to include me in the thread.

I definitely still have the issue.

For folks who are still able to reproduce the issue, when WinRM connectivity is timing out (a quick port-check sketch follows this list):

  • Can you telnet to the WinRM port(s) 5985 or 5986?
  • If you are able to connect via RDP, are there any relevant errors in the event viewer?
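
For anyone who prefers PowerShell over telnet, here is a minimal reachability check. This is illustrative only: the IP address is a placeholder for the temporary Packer VM, and it assumes Test-NetConnection is available (Windows 8 / Server 2012 or later).

# Placeholder public IP of the temporary Packer VM
$ip = '13.95.122.54'

# Check both WinRM ports; the Azure builder uses WinRM over HTTPS (5986)
foreach ($port in 5985, 5986) {
    $result = Test-NetConnection -ComputerName $ip -Port $port -WarningAction SilentlyContinue
    Write-Host ("Port {0}: TcpTestSucceeded = {1}" -f $port, $result.TcpTestSucceeded)
}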

@itzikbekel

same here, I was able to reproduce just now:
"location": "East US",
"image_offer": "WindowsServer",
"image_sku": "2019-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": true,
"winrm_insecure": true,
"winrm_timeout": "10m",

telnet to 5986 works, telnet to 5985 does not work.

10:02:18 ==> azure-arm: Waiting for WinRM to become available...
10:12:25 ==> azure-arm: Timeout waiting for WinRM.
10:12:25 ==> azure-arm:
10:12:25 ==> azure-arm: Cleanup requested, deleting resource group ...
10:12:25 ==> azure-arm:
10:12:25 ==> azure-arm: Not waiting for Resource Group delete as requested by user. Resource Group Name is packer-Resource-Group-z27ecnv9bw
10:12:25 Build 'azure-arm' errored: Timeout waiting for WinRM.
10:12:25
10:12:25 ==> Some builds didn't complete successfully and had errors:
10:12:25 --> azure-arm: Timeout waiting for WinRM.

@AlexeyKarpushin

Hi All,

I've created a workaround which allows our Azure DevOps pipelines to run. It doesn't solve the problem, but it allows us to ignore it. I can't paste the whole code here, but I can give a short description; hopefully it will be useful. The main idea is to re-create the WinRM listener on the temp machine during the Packer build.
Here are the steps:

  1. Enable the Packer log: set $Env:PACKER_LOG=1 and $Env:PACKER_LOG_PATH='path to packer log'
  2. Create a simple parser which analyzes the Packer log to find the resource group name and the temp VM name, and to detect the WinRM issue. The error message in the log which indicates the issue is: "An existing connection was forcibly closed by the remote host"
    2.1 If you find a better solution for discovering the resource group, please share it!
  3. If the issue is detected, execute Invoke-AzVMRunCommand against the temp machine. This code will reconfigure the WinRM listener:
$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME"
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force
  4. Run the resulting script as an async PowerShell job before starting the Packer build (a rough sketch of steps 2-4 follows this list). Set some timeout before parsing the log to allow Packer to provision the temp machine; 10 min works fine for me. If you're doing it from a YAML Azure DevOps pipeline, start the async job in the same step where Packer starts, otherwise the async job will be terminated by the Azure DevOps client.
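
For reference, a rough sketch of steps 2-4 is below. This is not the exact code: it assumes the Az PowerShell module is installed and authenticated, that $logPath matches $Env:PACKER_LOG_PATH, and that the resource group and VM names can be scraped from the builder's ResourceGroupName / ComputeName log lines (the regex patterns are illustrative and may need adjusting to your own log output).

$logPath = 'C:\temp\packer.log'          # must match $Env:PACKER_LOG_PATH
Start-Sleep -Seconds 600                 # give Packer time to provision the temp VM

$log = Get-Content -Raw -Path $logPath
if ($log -match 'An existing connection was forcibly closed by the remote host') {
    # Illustrative parsing of the azure-arm builder output; adjust to your own log
    $rg = [regex]::Match($log, "ResourceGroupName\s*:\s*'([^']+)'").Groups[1].Value
    $vm = [regex]::Match($log, "ComputeName\s*:\s*'([^']+)'").Groups[1].Value

    # Script that re-creates the HTTPS listener with a fresh self-signed certificate
    @'
$Cert = New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME"
Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse
New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force
'@ | Set-Content -Path .\fix-winrm.ps1

    # Run the repair script inside the temp VM
    Invoke-AzVMRunCommand -ResourceGroupName $rg -VMName $vm `
        -CommandId 'RunPowerShellScript' -ScriptPath .\fix-winrm.ps1
}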

I hope the issue will be mitigated in the nearest future and this workaround will not be needed.

Kind regards,
Alexey

@shurick81

I still have issues with building Windows Server 2019 (not 2016).

"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2019-Datacenter",
"image_version": "latest",

"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "30m",
"winrm_username": "packer",

"vm_size": "Standard_DS2_v2",
"managed_image_storage_account_type": "Premium_LRS",

region: west europe

@shurick81

With the size you recommend, Packer has no issues creating VMs:

"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2019-Datacenter-smalldisk",
"image_version": "latest",

"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "30m",
"winrm_username": "packer",

"vm_size": "Standard_D2_v2",
"managed_image_storage_account_type": "Standard_LRS",

However, builds became much slower with standard disks...

@faizan002

Hi, I have this timeout issue, which is quite sporadic; sometimes I can't create an image for a whole day and it works the next day. Lately I have started to see an "ssh timeout" issue for an Ubuntu image creation as well :( This is so frustrating.

@michaelmowry

@danielsollondon isn't this issue simply that the certificate name is wrong, as pointed out by @AliAllomani above? The certificate name expected and configured by Packer is machineName.cloudapp.net, whereas the actual machine FQDN has the format machineName.region.cloudapp.azure.com. Thoughts?

@nfischer2

When creating images from Marketplace images, everything works great:

"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2019-Datacenter",
"image_version": "latest",

"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "5m",
"winrm_username": "packer",

"vm_size": "Standard_D2_v2",

When trying to use an image from a Shared Image Gallery with the same VM size, I am getting timeouts:

"os_type": "Windows",

      "shared_image_gallery": {
        "subscription": "{{user `subscription`}}",
        "resource_group": "RG",
        "gallery_name": "gallery",
        "image_name": "win2019",
        "image_version": "latest"
         },

"communicator": "winrm",
"winrm_use_ssl": "true",
"winrm_insecure": "true",
"winrm_timeout": "5m",
"winrm_username": "packer",

"vm_size": "Standard_D2_v2",

During the build I opened an RDP session and confirmed that the listener, the certificate, and the firewall rule are present. I have confirmed that port 5986 does respond. The image from the SIG is configured for CIS compliance, so the following settings are configured:

Allow Basic authentication is disabled
Allow unencrypted traffic is disabled
Disallow WinRM from storing RunAs credentials is Enabled

All other winrm service settings are set to Not configured.

As far as I can tell WinRM should be working on this SIG image, and I'm not sure if it's possibly related to this issue or if there is a seemingly unrelated policy setting that is causing problems.

Any advice on this would be appreciated.

@nywilken
Member

nywilken commented Mar 28, 2020

Hi folks, thanks for keeping this thread up to date with the latest findings and test results. The Packer team has been monitoring this issue closely to see if there is anything on the Packer side that can be changed to resolve it. Looking at the thread, I see a possible cert domain change for self-signed certs and a change in vm_size (more of a user configuration change). I have since started a round of testing 10 managed disk builds (per test), across West US and East US, at 15-minute intervals, to test out the proposed solutions. My findings are as follows:

  1. v1.5.5 using Standard_D2_v2 resulted in a successful build every time.
  2. v1.5.5 patched to use <machinename>.<location>.cloudapp.azure.com using Standard_D2_v2 resulted in a successful build every time.
  3. v1.5.5 using Standard_DS2_v2 timed out 60% of the time.
  4. v1.5.5 patched to use <machinename>.<location>.cloudapp.azure.com using Standard_DS2_v2 timed out 80% of the time (seems like a lot, so maybe something else is going on; will retest)

What I found is that by using Standard_D2_v2 I was always able to get a successful build regardless of the domain name used for the self-signed cert. I did find some Azure examples where the domain used for self-signed certs is <machinename-randomnumbers>.<location>.cloudapp.azure.com which is an easy change to make, but that doesn't seem to be the issue. Please let me know if you are seeing otherwise. I can push up a WIP PR with the change if that makes it easier for folks to test.

With that being said, if you're still running into issues here, please make sure you are trying to create builds with the recommended vm_sizes #8658 (comment). If builds are still failing with the new VM sizes, please attach your build configuration and debug logs via a Gist in a new comment. If someone is using the same configuration as you, please just thumbs-up their config to help us determine the best set of configs to test against. Thanks!

During the build I opened an RDP session and confirmed that the listener, the certificate, and the firewall rule are present. I have confirmed that port 5986 does respond. The image from the SIG is configured for CIS compliance, so the following settings are configured:

Hello @nfischer2, I suspect this may be another issue, possibly related to the custom image with some of the CIS settings enabled. We've seen WinRM issues related to CIS in the past (#6951). Have you had a chance to reach out to our mailing list or community forums about your issue? We have a bigger community that may have people who can answer your questions better than me. After reaching out to the community forums, if you suspect that you are running into a bug, please open a new issue and attach the related Packer debug logs (PACKER_LOG=1 packer build template.json) so we can see if there is any way to help out.

@nfischer2

Thanks for the reply. My issue was not related to this thread. After further troubleshooting I found that the registry setting 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LocalAccountTokenFilterPolicy', when enabled (set to 0), prevents WinRM from opening a connection.

I wanted to provide an update in case it helps anyone else working with Packer and CIS Windows images.
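
For anyone else hitting this with CIS-hardened images, a minimal check/adjustment sketch is below. Whether setting the value back to 1 is acceptable depends entirely on your compliance requirements; this is only an illustration of the registry value in question.

# Inspect the CIS-related policy value that can block remote connections with local accounts
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
Get-ItemProperty -Path $key -Name LocalAccountTokenFilterPolicy -ErrorAction SilentlyContinue

# If your security baseline allows it, a value of 1 disables remote UAC token filtering,
# which lets WinRM connections authenticate with local administrator accounts again
Set-ItemProperty -Path $key -Name LocalAccountTokenFilterPolicy -Value 1 -Type DWord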

@harikumarks

@danielsollondon The solution to use Standard_D2_v2 worked for us. If it is useful for resolving the bug, I would like to point out what I have noted when it works vs. when it does not. When the WinRM connection works, the certificate available in LocalMachine (imported from Key Vault) has a private key associated with it; when it does not work, there is no private key associated. So something is breaking while getting/importing this certificate from Key Vault. This is not an issue on the Packer side, as the certificate secret passed into the ARM template via Key Vault has both the private key and the public key.
Output from a VM that works

PS C:\windows\system32>  $hostname='xxxxxxxxx.cloudapp.net'
PS C:\windows\system32>  $cert = (Get-ChildItem cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=" + $hostname } | Select-Object -Last 1)
PS C:\windows\system32> echo $cert.Thumbprint
7E1C9BXXXXXXXXXXXXXXXXXXXXXXD988FEB3
PS C:\windows\system32> echo $cert.PrivateKey

PublicOnly           : False
CspKeyContainerInfo  : System.Security.Cryptography.CspKeyContainerInfo
KeySize              : 2048
KeyExchangeAlgorithm : RSA-PKCS1-KeyEx
SignatureAlgorithm   : http://www.w3.org/2000/09/xmldsig#rsa-sha1
PersistKeyInCsp      : True
LegalKeySizes        : {System.Security.Cryptography.KeySizes}



Output from a VM that does not work

PS C:\Users\testadmin> $hostname='yyyyyyyy.cloudapp.net'
PS C:\Users\testadmin> $cert = (Get-ChildItem cert:\LocalMachine\My | Where-Object { $_.Subject -eq "CN=" + $hostname } | Select-Object -Last 1)
PS C:\Users\testadmin> echo $cert.Thumbprint
89706YYYYYYYYYyYYYYYYYYYYYYYYYY9FF8DE
PS C:\Users\testadmin> echo $cert.PrivateKey
PS C:\Users\testadmin> !!!NO OUTPUT HERE!!!

@amarkulis

Thanks for the reply. My issue was not related to this thread. After further troubleshooting I found that the registry setting 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LocalAccountTokenFilterPolicy', when enabled (set to 0), prevents WinRM from opening a connection.

I wanted to provide an update in case it helps anyone else working with Packer and CIS Windows images.
@nfischer2 How did you resolve this? Did you manually update the image and then deploy from that going forward or did you implement a fix in your pipeline?

@pmozbert

pmozbert commented Jun 8, 2020

The WinRM timeout with Windows Server 2016 is still happening. Is there an open ticket with Azure for this?

@adamrushuk

@pmozbert what VM size are you using?

Since I moved to Standard_D2_v2, I've not had a single WinRM timeout.

@nfischer2

@amarkulis We had to move away from Packer and now use Azure custom script extension to configure Windows CIS images for Azure.

@adamrushuk

Agreed, the CIS benchmarks can definitely complicate matters.

@pmozbert

pmozbert commented Jun 9, 2020

I increased the VM size to Standard_DS_v2 and that worked for a while several months ago, along with a 30m timeout, but now the timeouts are back.

@d4md1n

d4md1n commented Jun 11, 2020

Having the same level of build flakiness with Standard_B2ms vm size

@shurick81

I have built quite a few Windows 2016 images with the Standard_D4s_v3 size and it usually works just fine, but the last 4 times in a row I got this error again.

            "image_publisher": "MicrosoftWindowsServer",
            "image_offer": "WindowsServer",
            "image_sku": "2016-Datacenter-smalldisk",
            "image_version": "latest",
        
            "communicator": "winrm",
            "winrm_use_ssl": "true",
            "winrm_insecure": "true",
            "winrm_timeout": "30m",
            "winrm_username": "packer",
            "temp_compute_name": "swazpkr00",
        
            "location": "WestEurope",
            "vm_size": "Standard_D4s_v3",
            "managed_image_storage_account_type": "Standard_LRS"

@JamesAllen16

I am getting this issue as well with Standard_D4s_v3 at the moment. It has troubled our team on and off for months even with a timeout of 30m...

            "os_type": "Windows",
            "image_publisher": "MicrosoftWindowsServer",
            "image_offer": "WindowsServer",
            "image_sku": "2016-Datacenter",
            "os_disk_size_gb": 256,
            "communicator": "winrm",
            "winrm_use_ssl": true,
            "winrm_insecure": true,
            "winrm_timeout": "30m",

Also seeing it with Windows 10 Pro RS4:

            "os_type": "Windows",
            "image_publisher": "MicrosoftWindowsDesktop",
            "image_offer": "Windows-10",
            "image_sku": "rs4-pron",

@sokoloffmaks

Hi, same issue with one of our clients' WVD Win 10 multi-session deployments.
When I deploy the same Packer template to my MSDN test subscription, it goes just fine.
Weird.

        "os_type": "Windows",
        "image_publisher": "MicrosoftWindowsDesktop",
        "image_offer": "office-365",
        "image_sku": "20h1-evd-o365pp",
        "communicator": "winrm",
        "winrm_use_ssl": "true",
        "winrm_insecure": "true",
        "winrm_timeout": "15m",
        "winrm_username": "packer",
        "location": "EastUS",
        "vm_size": "Standard_DS4_v2",
        "async_resourcegroup_delete":false,
        "managed_image_storage_account_type": "Standard_LRS",

@ghost

ghost commented Apr 30, 2021

This issue has been automatically migrated to hashicorp/packer-plugin-azure#38 because it looks like an issue with that plugin. If you believe this is not an issue with the plugin, please reply to hashicorp/packer-plugin-azure#38.

@ghost ghost closed this as completed Apr 30, 2021
@zsiebers

Hello,

We are seeing the same issue here as well. First it was a 503 error, and we determined the listener was set to HTTPS even though the winrm_insecure parameter was set to true. We manually reset the listener to use HTTP only and are now seeing the following error:

2021/09/10 14:16:55 packer.exe plugin: [DEBUG] connecting to remote shell using WinRM
2021/09/10 14:16:55 packer.exe plugin: [ERROR] connection error: http response error: 401 - invalid content type
2021/09/10 14:16:55 packer.exe plugin: [ERROR] WinRM connection err: http response error: 401 - invalid content type

This issue was closed.