Every time I try to create a cluster via the API, I get a 400 error, even when I use the example JSON from the API documentation verbatim:
```
{
  "cluster_name": "single-node-cluster",
  "node_type_id": "i3.xlarge",
  "spark_version": "7.6.x-scala2.12",
  "num_workers": 0,
  "custom_tags": {
    "ResourceClass": "SingleNode"
  },
  "spark_conf": {
    "spark.databricks.cluster.profile": "singleNode",
    "spark.master": "[*, 4]"
  }
}
```
I thought maybe the example Spark version was too old, so I set it to 13.0, but that made no difference.
The request is submitted using the ApiClient class from the Python databricks-cli package.
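Stripped down, the submitting code looks roughly like this (host and token are placeholders, and the cluster spec is the same JSON as above, passed through the SDK's ClusterService wrapper, which hands the request to ApiClient):
```
import json

from databricks_cli.sdk.api_client import ApiClient
from databricks_cli.sdk.service import ClusterService

# Placeholder host/token -- the real values come from my config.
client = ApiClient(
    host="https://mothership-production.cloud.databricks.com",
    token="<REDACTED>",
)

# Same cluster spec as the JSON above.
payload = {
    "cluster_name": "single-node-cluster",
    "node_type_id": "i3.xlarge",
    "spark_version": "7.6.x-scala2.12",
    "num_workers": 0,
    "custom_tags": {"ResourceClass": "SingleNode"},
    "spark_conf": {
        "spark.databricks.cluster.profile": "singleNode",
        "spark.master": "[*, 4]",
    },
}

# Serialize the spec and hand it to the SDK, which submits it via ApiClient.
ClusterService(client).create_cluster(json.dumps(payload))
```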
Full error text:
```
HTTPError: 400 Client Error: Bad Request for url: https://mothership-production.cloud.databricks.com/api/2.0/clusters/create
Response from server:
{ 'details': [ { '@type': 'type.googleapis.com/google.rpc.ErrorInfo',
'domain': '',
'reason': 'CM_API_ERROR_SOURCE_CALLER_ERROR'}],
'error_code': 'INVALID_PARAMETER_VALUE',
'message': "Missing required field 'spark_version'"}
```
In the Python debugger, the request goes through `perform_query()` at `databricks_cli/sdk/api_client.py`, line 174.
If I have the debugger print out the parameters for that query, I get:
```
data = {'num_workers': '{"cluster_name": "single-node-cluster", "node_type_id": "i3.xlarge", "spark_version": "7.6.x-scala2.12", "num_workers": 0, "custom_tags": {"ResourceClass": "SingleNode"}, "spark_conf": {"spark.databricks.cluster.profile": "singleNode", "spark.master": "[*, 4]"}}'}
path = '/clusters/create'
headers = {'Authorization': 'Bearer [REDACTED]', 'Content-Type': 'text/json', 'user-agent': 'databricks-cli-0.17.7-'}
method = 'POST'
```
Note that the entire cluster spec has ended up as a single string under a `num_workers` key instead of being sent as the request body, which presumably is why the server can't find `spark_version`; I don't understand why that happens.
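For contrast, this is what I would have expected `data` to look like at that point, i.e. just the spec as a dict:
```
data = {'cluster_name': 'single-node-cluster', 'node_type_id': 'i3.xlarge',
        'spark_version': '7.6.x-scala2.12', 'num_workers': 0,
        'custom_tags': {'ResourceClass': 'SingleNode'},
        'spark_conf': {'spark.databricks.cluster.profile': 'singleNode',
                       'spark.master': '[*, 4]'}}
```
What am I doing wrong?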