I have configured a job using `databricks.yml`:
```yaml
resources:
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: create_feature_tables
          job_cluster_key: my_job_cluster
          spark_python_task:
            python_file: ../src/create_feature_tables.py
        - task_key: evaluate_model
          job_cluster_key: my_job_cluster
          spark_python_task:
            python_file: ../src/evaluate_model.py
      job_clusters:
        - job_cluster_key: my_job_cluster
          new_cluster:
            policy_id: ${var.job_cluster_policy}
            spark_version: 15.2.x-cpu-ml-scala2.12
            node_type_id: i3.xlarge
            aws_attributes:
              first_on_demand: 1
            autoscale:
              min_workers: 1
              max_workers: 12
```
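For reference, I validate, deploy, and trigger the job with the standard bundle CLI commands (the `dev` target name below is just illustrative; I use my own target):

```bash
# Check the bundle configuration for errors before deploying
databricks bundle validate

# Deploy the bundle to the workspace (target name is illustrative)
databricks bundle deploy -t dev

# Trigger a run of the job defined under resources.jobs.my_job
databricks bundle run my_job -t dev
```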
For some reason, the job runs on shared compute rather than on `my_job_cluster` as configured in the `databricks.yml` file.
How can I coax Databricks into using `my_job_cluster` instead of shared compute?