This repository has been archived by the owner on Feb 6, 2023. It is now read-only.

requests.exceptions.HTTPError: 400 Client Error: Bad Request for url #55

Closed
di-lin-mckinsey opened this issue Aug 3, 2022 · 7 comments

@di-lin-mckinsey commented Aug 3, 2022

Hi,

After successfully completing the deploy step, I got the following error when executing `dbx launch --job=PROD-telco-churn-initial-model-train-register --environment=prod --as-run-submit --trace`.

I'm quite sure I have set the access token for my prod environment. Any idea why the connection might fail?

Thank you.

```
[dbx][2022-08-03 10:03:04.737] Deployment for environment prod finished successfully ✨
[dbx][2022-08-03 10:03:07.213] Launching job PROD-telco-churn-initial-model-train-register on environment prod
[dbx][2022-08-03 10:03:07.216] Using profile provided from the project file
[dbx][2022-08-03 10:03:07.216] Found auth config from provider ProfileEnvConfigProvider, verifying it
[dbx][2022-08-03 10:03:07.216] Found auth config from provider ProfileEnvConfigProvider, verification successful
[dbx][2022-08-03 10:03:07.216] Profile prd will be used for deployment
[dbx][2022-08-03 10:03:09.464] No additional tags provided
[dbx][2022-08-03 10:03:09.467] Successfully found deployment per given job name
[dbx][2022-08-03 10:03:10.763] Launching job via run submit API
Traceback (most recent call last):
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/databricks_cli/sdk/api_client.py", line 138, in perform_query
    resp.raise_for_status()
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/requests/models.py", line 943, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://adb-5150270794210211.11.azuredatabricks.net/api/2.0/jobs/runs/submit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/di_lin/opt/anaconda3/envs/erebus/bin/dbx", line 8, in <module>
    sys.exit(cli())
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/dbx/commands/launch.py", line 173, in launch
    run_data, job_id = run_launcher.launch()
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/dbx/commands/launch.py", line 331, in launch
    run_data = _submit_run(self.api_client, job_spec)
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/dbx/commands/launch.py", line 400, in _submit_run
    return api_client.perform_query("POST", "/jobs/runs/submit", data=payload)
  File "/Users/di_lin/opt/anaconda3/envs/erebus/lib/python3.9/site-packages/databricks_cli/sdk/api_client.py", line 146, in perform_query
    raise requests.exceptions.HTTPError(message, response=e.response)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://adb-xxxxxxxxxxxxxxxx.11.azuredatabricks.net/api/2.0/jobs/runs/submit
 Response from server: 
 { 'error_code': 'INVALID_PARAMETER_VALUE',
  'message': 'Node type i3.xlarge is not supported. Supported node types: '
             'Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, '
             'Standard_D4s_v3, Standard_D8s_v3, Standard_D16s_v3, '
             'Standard_D32s_v3, Standard_D64s_v3, Standard_D4a_v4, '
             'Standard_D8a_v4, Standard_D16a_v4, Standard_D32a_v4, '
             'Standard_D48a_v4, Standard_D64a_v4, Standard_D96a_v4, '
             'Standard_D8as_v4, Standard_D16as_v4, Standard_D32as_v4, '
             'Standard_D48as_v4, Standard_D64as_v4, Standard_D96as_v4, '
             'Standard_D4ds_v4, Standard_D8ds_v4, Standard_D16ds_v4, '
             'Standard_D32ds_v4, Standard_D48ds_v4, Standard_D64ds_v4, '
             'Standard_D3_v2, Standard_D4_v2, Standard_D5_v2, Standard_D8_v3, '
             'Standard_D16_v3, Standard_D32_v3, Standard_D64_v3, '
             'Standard_D4d_v4, Standard_D8d_v4, Standard_D16d_v4, '
             'Standard_D32d_v4, Standard_D48d_v4, Standard_D64d_v4, '
             'Standard_D12_v2, Standard_D13_v2, Standard_D14_v2, '
             'Standard_D15_v2, Standard_DS12_v2, Standard_DS13_v2, '
             'Standard_DS14_v2, Standard_DS15_v2, Standard_E8_v3, '
             'Standard_E16_v3, Standard_E32_v3, Standard_E64_v3, '
             'Standard_E8s_v3, Standard_E16s_v3, Standard_E32s_v3, '
             'Standard_E64s_v3, Standard_E4d_v4, Standard_E8d_v4, '
             'Standard_E16d_v4, Standard_E20d_v4, Standard_E32d_v4, '
             'Standard_E48d_v4, Standard_E64d_v4, Standard_E4ds_v4, '
             'Standard_E8ds_v4, Standard_E16ds_v4, Standard_E20ds_v4, '
             'Standard_E32ds_v4, Standard_E48ds_v4, Standard_E64ds_v4, '
             'Standard_E80ids_v4, Standard_E4a_v4, Standard_E8a_v4, '
             'Standard_E16a_v4, Standard_E20a_v4, Standard_E32a_v4, '
             'Standard_E48a_v4, Standard_E64a_v4, Standard_E96a_v4, '
             'Standard_E4as_v4, Standard_E8as_v4, Standard_E16as_v4, '
             'Standard_E20as_v4, Standard_E32as_v4, Standard_E48as_v4, '
             'Standard_E64as_v4, Standard_E96as_v4, Standard_E4s_v4, '
             'Standard_E8s_v4, Standard_E16s_v4, Standard_E20s_v4, '
             'Standard_E32s_v4, Standard_E48s_v4, Standard_E64s_v4, '
             'Standard_E80is_v4, Standard_L4s, Standard_L8s, Standard_L16s, '
             'Standard_L32s, Standard_F4, Standard_F8, Standard_F16, '
             'Standard_F4s, Standard_F8s, Standard_F16s, Standard_H8, '
             'Standard_H16, Standard_F4s_v2, Standard_F8s_v2, '
             'Standard_F16s_v2, Standard_F32s_v2, Standard_F64s_v2, '
             'Standard_F72s_v2, Standard_NC12, Standard_NC24, '
             'Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, '
             'Standard_NC4as_T4_v3, Standard_NC8as_T4_v3, '
             'Standard_NC16as_T4_v3, Standard_NC64as_T4_v3, Standard_L8s_v2, '
             'Standard_L16s_v2, Standard_L32s_v2, Standard_L64s_v2, '
             'Standard_L80s_v2, Standard_D4s_v5, Standard_D8s_v5, '
             'Standard_D16s_v5, Standard_D32s_v5, Standard_D48s_v5, '
             'Standard_D64s_v5, Standard_D96s_v5, Standard_D4ds_v5, '
             'Standard_D8ds_v5, Standard_D16ds_v5, Standard_D32ds_v5, '
             'Standard_D48ds_v5, Standard_D64ds_v5, Standard_D96ds_v5, '
             'Standard_E4s_v5, Standard_E8s_v5, Standard_E16s_v5, '
             'Standard_E20s_v5, Standard_E32s_v5, Standard_E48s_v5, '
             'Standard_E64s_v5, Standard_E96s_v5, Standard_E4ds_v5, '
             'Standard_E8ds_v5, Standard_E16ds_v5, Standard_E20ds_v5, '
             'Standard_E32ds_v5, Standard_E48ds_v5, Standard_E64ds_v5, '
             'Standard_E96ds_v5'}
```
@niall-turbitt (Owner)

It looks like you're running on an Azure workspace, but supplying an AWS instance type.

In `deployment.yml` we define `node_type_id` and `driver_node_type_id` using `i3.xlarge` instance types, which are AWS-specific. Ultimately, the choice of instance type is up to the user, but the nearest equivalent on Azure would be `Standard_DS3_v2`. I would suggest replacing `i3.xlarge` with `Standard_DS3_v2` and giving it a go.
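
For illustration, a minimal sketch of the relevant cluster block, assuming the jobs-style `deployment.yml` layout dbx used at the time (the job name comes from this thread; `spark_version` and `num_workers` are placeholders):

```yaml
environments:
  prod:
    jobs:
      - name: "PROD-telco-churn-initial-model-train-register"
        new_cluster:
          spark_version: "11.0.x-cpu-ml-scala2.12"  # placeholder; match your target DBR
          num_workers: 1                            # placeholder worker count
          node_type_id: "Standard_DS3_v2"           # Azure equivalent of AWS i3.xlarge
          driver_node_type_id: "Standard_DS3_v2"
```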

@di-lin-mckinsey (Author) commented Aug 3, 2022

Thanks @niall-turbitt. Setting `node_type_id` and `driver_node_type_id` to `Standard_DS3_v2` solves it; the launch command now executes. Running this job seems quite slow, though. Would using a larger instance, like `Standard_DS4_v2` or `Standard_DS5_v2`, be likely to help?

@niall-turbitt (Owner)

The actual job should not take more than 2-5 minutes to execute; however, it can take a similar amount of time just to acquire the VMs for the cluster. Note the comment just under where you specify the node and worker types.

If you have the permissions to do so, I would set up an instance pool and supply the pool ID to the commented-out `driver_instance_pool_id` and `instance_pool_id` attributes. Each job can then acquire resources from that warm pool of instances. Note that you will then have to comment out `node_type_id` and `driver_node_type_id`.

Your `deployment.yml` would then look something like the following:

[Screenshot: deployment.yml with instance pool IDs in place of node types]
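
Since the screenshot isn't legible here, a hedged reconstruction of that cluster block (the pool IDs are hypothetical placeholders; the layout follows the same jobs-style format as above):

```yaml
new_cluster:
  spark_version: "11.0.x-cpu-ml-scala2.12"         # placeholder; match the pool's preloaded runtime
  num_workers: 1                                   # placeholder
  # node_type_id: "Standard_DS3_v2"                # commented out in favour of the pool
  # driver_node_type_id: "Standard_DS3_v2"
  instance_pool_id: "0730-xxxxxx-pool-xxxx"        # hypothetical pool ID
  driver_instance_pool_id: "0730-xxxxxx-pool-xxxx" # hypothetical pool ID
```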

@di-lin-mckinsey (Author)

Thanks. Just to double-check: what should the compatible preloaded Databricks Runtime version be? Should it be 11.0 ML (Scala 2.12, Spark 3.3.0) or 11.0 (Scala 2.12, Spark 3.3.0)?

@niall-turbitt (Owner)

You should select DBR 11.0 ML (Scala 2.12, Spark 3.3.0).
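
For context, the runtime preloaded on the pool should match the `spark_version` the job cluster requests, so clusters drawn from the pool start faster. The identifier below is the conventional format for ML runtimes, but it's an assumption here, so verify the exact string against your workspace:

```yaml
new_cluster:
  # Assumed identifier for DBR 11.0 ML (Scala 2.12, Spark 3.3.0);
  # confirm the exact string via the Databricks UI or Clusters API.
  spark_version: "11.0.x-cpu-ml-scala2.12"
```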

@di-lin-mckinsey (Author)

Thanks! I have created the pool now. Do I also need to add a starter job, as the video explained? The interface is quite different, though, and I couldn't find a place to specify the use of a pool here.

[Screenshot: Databricks job creation UI]

Is there any sample code I could copy over for the starter job?

Thank you.

@niall-turbitt (Owner)

Creating a starter job is not required. The first job you submit against the pool may take a few minutes to acquire the VMs for the pool, but the pool will then remain warm for the time period you have set. Subsequent jobs using the pool will acquire resources more quickly, as they draw VMs from the now-warm pool.
