05-02-2024 02:08 AM - edited 05-02-2024 02:11 AM
Good morning,
I'm trying to run:
databricks bundle run --debug -t dev integration_tests_job
My bundle configuration looks like this:
bundle:
  name: x

include:
  - ./resources/*.yml

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: x
    run_as:
      user_name: x
  prod:
    mode: production
    workspace:
      host: x
    run_as:
      user_name: x
resources:
  jobs:
    integration_tests_job:
      name: integration_tests_job
      email_notifications:
        on_failure:
          - x
      tasks:
        - task_key: notebook_task
          job_cluster_key: job_cluster
          notebook_task:
            notebook_path: ../tests/integration/main.py
      job_clusters:
        - job_cluster_key: job_cluster
          existing_cluster_id: x
And I'm getting this error:
10:56:28 ERROR Error: no deployment state. Did you forget to run 'databricks bundle deploy'? pid=265687 mutator=seq mutator=terraform.Load
10:56:28 ERROR Error: no deployment state. Did you forget to run 'databricks bundle deploy'? pid=265687 mutator=seq
Error: no deployment state. Did you forget to run 'databricks bundle deploy'?
10:56:28 ERROR failed execution pid=265687 exit_code=1 error="no deployment state. Did you forget to run 'databricks bundle deploy'?"
The deployment itself seems to complete correctly, creating /Workspace/Users/x/.bundle/x/dev/state/terraform.tfstate with the following content:
{
  "version": 4,
  "terraform_version": "1.5.5",
  "serial": 1,
  "lineage": "x",
  "outputs": {},
  "resources": [],
  "check_results": null
}
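I notice that "resources" is empty, so Terraform does not seem to be tracking the job at all. If it helps, this is how the resolved configuration can be inspected (databricks bundle summary may require a newer CLI version):

# Render the resolved bundle configuration for the dev target;
# if integration_tests_job is missing here, the include glob in
# the bundle file did not match any resource files.
databricks bundle validate -t dev

# Summarize the deployed resources (available in newer CLI versions).
databricks bundle summary -t dev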
Could you help me with the error?
Jordi
05-03-2024 03:56 AM
Hi @Retired_mod, I'm going to review your suggestions in detail. In the meantime, let me respond point by point in case you have any further ideas.
I have executed the deploy command and the files are deployed to the correct path. I also see the Terraform state file locally and at the workspace path I mentioned earlier (I pasted its content above). I can see the artifacts, files, and state folders in the Databricks workspace path.
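To double-check what was written to the workspace, I can list and export the remote state (paths are illustrative; adjust the user and bundle names):

# List the deployed bundle state files in the workspace.
databricks workspace list /Users/x/.bundle/x/dev/state

# Print the remote state file to inspect it.
databricks workspace export /Users/x/.bundle/x/dev/state/terraform.tfstate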
I'm going to review this thoroughly. I have the Databricks CLI authenticated, and the deploy completes correctly. What I think I haven't configured are the environment variables (the host is taken from the bundle's .yaml file, if I'm not mistaken, right?).
My configuration seems correct, but I'll review it thoroughly just in case.
The state is what I showed above; apparently everything is correct.
I also need to check this, because my cluster runs Databricks Runtime 12.2 LTS (includes Apache Spark 3.3.2, Scala 2.12). Where can I find which Databricks versions are compatible with Terraform 1.5.5?
Thank you very much for the information. Let's see if I can get the run command to work after making the changes recommended in the URL.
05-06-2024 01:31 AM - edited 05-06-2024 01:33 AM
Hello @Retired_mod, I have been running tests and I can't get it to work. The problem arises when running the integration test job. As a temporary measure to avoid getting stuck, I've used the Databricks CLI (jobs) to create and execute the job, and it works correctly. All these tests are being conducted from my local machine with user authentication. Once it's working, I'll use a service principal from Azure DevOps pipelines.
I want to inform you that I have authenticated the CLI as follows (in ~/.databrickscfg), and I have configured the environment variables this way:
; The profile defined in the DEFAULT section is to be used as a fallback when no profile is explicitly specified.
[DEFAULT]
[adb-x]
host = x
auth_type = databricks-cli
export DATABRICKS_HOST="x"
export DATABRICKS_TOKEN="x"
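As a sanity check that the CLI resolves to the expected host and identity, these commands can help (availability depends on the CLI version):

# Show which authentication method, profile, and host the CLI resolved.
databricks auth describe

# Confirm which user the token maps to in the workspace.
databricks current-user me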
I have conducted tests by modifying fields in the YAML, for instance, adding:
permissions:
  - level: CAN_RUN
    user_name: x

run_as:
  - user_name: x
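One thing I am not sure about is whether run_as should be a mapping instead of a list; the bundle examples I have seen write it like this:

# run_as as a mapping (how it appears in the examples I have seen):
run_as:
  user_name: x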
I can't seem to make it work. The deployment succeeds, so do you have any idea what might be causing the 'run' command to return 'ERROR Error: no deployment state. Did you forget to run 'databricks bundle deploy'? pid=265687 mutator=seq mutator=terraform.Load'?
Jordi
06-20-2024 08:57 AM
I wanted to share that I came across this issue as well. I was getting an empty terraform.tfstate file after deploying to a workspace and was not able to "run". After noticing that "resources": [] was empty in the state file (just like yours), I suspected the resources file could not be found, and I was right.
This line in the databricks.yaml file was referencing ".yml" files, but my resources file had a ".yaml" extension instead:
include:
  - ./resources/*.yml
In the resources folder: "setup_jobs.yaml"
I changed the file extension so the glob would match; the deployment's Terraform step now finds the file, and the bundle deploys and runs properly.
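Alternatively, the glob can cover both extensions so the file naming no longer matters (assuming the resource files live under ./resources):

include:
  - ./resources/*.yml
  - ./resources/*.yaml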
I hope that helps.
06-20-2024 09:24 AM
Good afternoon @Aaron12
Thank you very much for your response. I will test your solution and then close the post. Since I didn't receive a response earlier, I ended up running the jobs with the Databricks Jobs CLI. Once those jobs have run, I delete them, because I only use them for my CI/CD. Do you know whether Databricks Asset Bundles can also execute jobs and then delete them afterwards?
06-20-2024 09:37 AM
Ahh interesting... so it sounds like you gave up on DABs because of this issue. Sorry to hear that.
Yes, you can use the command "databricks bundle destroy --target <target_name>" to delete everything the bundle deployed, including the jobs declared as workflows.
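So a create-run-delete cycle for CI/CD could look like this (the target name "dev" is just an example):

# Deploy the bundle, run the job it defines, then tear everything down.
databricks bundle deploy -t dev
databricks bundle run -t dev integration_tests_job
databricks bundle destroy -t dev --auto-approve    # skips the confirmation prompt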
2 weeks ago
Hello,
Reopening this ticket in hopes that either of you had some luck resolving your bug. I am currently facing the same issue: I can deploy an asset bundle via the local CLI without problems (by "deploy" I mean the bundle code is written to my workspace and the workflow is created). However, when I deploy the bundle via an Azure DevOps pipeline, it only uploads the bundle resource code; no workflow is created.
The interesting similarity to the issue @jorperort reported is that no deployment.json file is created in the case where the workflow isn't created (i.e., when I use the Azure DevOps pipeline). I'm not sure why I can deploy the bundle and its workflow via the local CLI when authenticated with a PAT, but when I use that same PAT in the Azure DevOps pipeline the workflow itself is not deployed (although the code is still written to the .bundle folder I specify).
Any suggestions would be greatly appreciated.
2 weeks ago - last edited 2 weeks ago
Hello @jtberman,
I wouldn't be able to say what the issue might be, but I might be able to if you provide a bit more information, for example, part of the bundle files being deployed and what executes the workflow. That might help resolve the problem. In the case of the Azure DevOps pipeline, you could print the output of the run command.
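For example, a step along these lines would surface the CLI's debug output (the step layout and variable names are assumptions, not your actual pipeline):

# Hypothetical Azure DevOps step: validate and deploy with debug logging.
- script: |
    databricks bundle validate -t dev --debug
    databricks bundle deploy -t dev --debug
  displayName: "Deploy bundle (debug)"
  env:
    DATABRICKS_HOST: $(DATABRICKS_HOST)      # workspace URL
    DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)    # PAT stored as a secret variable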