
Asset Bundle deploy fails: terraform apply: failed to instantiate provider #1613

Closed
dbph opened this issue Jul 19, 2024 · 11 comments
Labels: Bug (Something isn't working), DABs (DABs related issues), Response Requested

Comments


dbph commented Jul 19, 2024

Describe the issue

Every "bundle deploy" fails locally on my machine, but works perfectly when I deploy with an Azure DevOps deployment pipeline.

Error message is "Error: failed to read schema for [my_job_name] in registry.terraform.io/databricks/databricks: failed to instantiate provider "registry.terraform.io/databricks/databricks" to obtain schema: Unrecognized remote plugin message: This usually means that the plugin is either invalid or simply needs to be recompiled to support the latest protocol."

After the process has failed, if I change the h1 checksum (adding just one character) in the .terraform.lock.hcl file in the bundle's .databricks folder and run "bundle deploy" a second time, the deployment finishes successfully.
The previous h1 hash is then re-added to the .terraform.lock.hcl file.

However, if I then run "bundle deploy" again, it fails with the original error message.

Changing the h1 hash again results in another successful run, and so on.
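For anyone trying to reproduce the lock-file observation, the pinned checksums can be inspected like this. This is a sketch: the path assumes the default "dev" bundle target used in this thread, and may differ on your machine.

```shell
# Print the h1 checksum lines pinned for the databricks provider
# (assumed path; "dev" is the bundle target name).
LOCKFILE=".databricks/bundle/dev/terraform/.terraform.lock.hcl"
if [ -f "$LOCKFILE" ]; then
  grep 'h1:' "$LOCKFILE"
fi
```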

Steps to reproduce the behavior

Create a new bundle with
databricks bundle init default-python
then run
databricks bundle deploy
and observe the error.

Expected Behavior

bundle deploy results in "Deployment complete!"

Actual Behavior

bundle deploy results in "Error: failed to read schema for [my_job_name] in registry.terraform.io/databricks/databricks: failed to instantiate provider "registry.terraform.io/databricks/databricks" to obtain schema: Unrecognized remote plugin message: This usually means that the plugin is either invalid or simply needs to be recompiled to support the latest protocol."

OS and CLI version

Windows, with Databricks CLI v0.218.1; I've also tried several other versions.

@dbph dbph added the DABs DABs related issues label Jul 19, 2024
@andrewnester (Contributor)

Could you try upgrading to the latest version (0.224.0), removing the Terraform cache folder, and seeing if the error persists?
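As a sketch of that cleanup (Linux/macOS paths assumed; on Windows the user-level folder is %APPDATA%\terraform.d, as the next comment notes):

```shell
# Remove the bundle-local Terraform state and cached provider binaries
# (run from the bundle root).
rm -rf .databricks

# Remove the user-level Terraform plugin directory, if present.
rm -rf "$HOME/.terraform.d"
```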


dbph commented Jul 22, 2024

Hi,
thanks for your help!

I deleted the folder C:\Users\{my_user}\AppData\Roaming\terraform.d, deleted the .databricks folder in the asset bundle folder, and downloaded the latest version of the Databricks CLI:

databricks -v
returns Databricks CLI v0.224.0
databricks bundle deploy
unfortunately returns the same error as before.

Now the trick of changing the h1 hash of the databricks provider in .terraform.lock.hcl doesn't work anymore.

@pietern pietern added Bug Something isn't working and removed Response Requested labels Jul 23, 2024

pietern commented Jul 23, 2024

@dbph Could you try running the Terraform provider directly as a binary?

If I do this on my machine, I get:

% .databricks/bundle/dev/terraform/.terraform/providers/registry.terraform.io/databricks/databricks/1.48.0/darwin_arm64/terraform-provider-databricks_v1.48.0
Databricks Terraform Provider

Version 1.48.0

https://registry.terraform.io/providers/databricks/databricks/latest/docs

This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically


dbph commented Jul 23, 2024

Hi, in the meantime I reinstalled the previous version of the CLI (the one currently shipped with the VS Code extension), so when executing the Databricks Terraform provider exe I get:

Databricks Terraform Provider

Version 1.40.0

https://registry.terraform.io/providers/databricks/databricks/latest/docs

This binary is a plugin. These are not meant to be executed directly.
Please execute the program that consumes these plugins, which will
load any plugins automatically


dbph commented Jul 29, 2024

Hi,

anything else I could try?

Thanks :-)

@andrewnester (Contributor)

It looks like something might be cached somewhere, and your TF provider also seems to be quite old, hence the error. Are you sure you don't have a local installation of the TF provider somewhere else on your machine?


dbph commented Jul 29, 2024

I checked again. There is no local installation of the TF provider on my machine.
The old TF provider was there because I used the Databricks CLI shipped with the Databricks VS Code extension. I've tried again with Databricks CLI v0.224.1, which uses TF provider 1.49.1.

I still get the same error. If I delete terraform-provider-databricks_v1.49.1.exe in .databricks\bundle\dev\terraform\.terraform\providers\registry.terraform.io\databricks\databricks\1.49.1\windows_amd64 in the bundle folder I want to deploy, the deployment finishes successfully.
If I try it again, I get the same error. If I delete the file again, it works once more.
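The delete-the-binary workaround described above can be expressed as commands. A hedged sketch: the "dev" target name and the provider cache layout are taken from this thread and will differ per CLI version and platform.

```shell
# Delete the cached databricks provider binaries so the CLI has to
# re-download a fresh copy (assumed cache layout from this thread).
PROVIDER_CACHE=".databricks/bundle/dev/terraform/.terraform/providers/registry.terraform.io/databricks/databricks"
rm -rf "$PROVIDER_CACHE"

# Re-run the deploy, if the CLI is available on PATH.
if command -v databricks >/dev/null 2>&1; then
  databricks bundle deploy
fi
```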

@andrewnester (Contributor)

@dbph can you give it a try with the latest 0.229.0 version of CLI?

@akshay-ramanujam-pfj

@andrewnester, I am getting a similar error; I tried Databricks CLI v0.230.0. Any update?

Error: exit status 1
Error: failed to read schema for databricks_job.test_DAB_run_job in registry.terraform.io/databricks/databricks: failed to instantiate provider "registry.terraform.io/databricks/databricks" to obtain schema: Unrecognized remote plugin message:

This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.

@andrewnester (Contributor)

@akshay-ramanujam-pfj this means something might have gotten corrupted in the underlying state. Did you make any manual changes to the files in the .databricks directory?

@akshay-ramanujam-pfj

@andrewnester, I think it's a permissions error on my system. When I tried running the provider binary directly, the process was getting killed. I had to use sudo to fix that. When I deployed with sudo, it was able to deploy. Thanks
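For a killed provider process, a less drastic check than deploying with sudo is to verify that the cached provider binary still carries its execute bit. This is a hedged sketch; the cache layout is the one shown earlier in this thread and may differ on your machine.

```shell
# If the provider process dies at startup, make sure the cached binary is
# executable before resorting to sudo (assumed cache layout from this thread).
PROVIDERS=".databricks/bundle/dev/terraform/.terraform/providers"
if [ -d "$PROVIDERS" ]; then
  # Restore the execute bit on any cached databricks provider binary.
  find "$PROVIDERS" -type f -name 'terraform-provider-databricks*' -exec chmod +x {} +
fi
```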

andrewnester closed this as not planned (won't fix, can't repro, duplicate, stale) on Oct 16, 2024.