Hi @toast_2001,
I did some digging and have a few suggestions to help your troubleshooting. Let me walk through what's likely happening and what to actually do about it.
The error tells you that on the second deployment, DAB is trying to look up the existing demo.landing schema (because it thinks it can skip it as unchanged), but Unity Catalog is returning a 404 — the schema isn't there when DAB goes to check it. Something is dropping it between runs, or DAB is looking in the wrong place.
Here's where I'd start:
1. **Confirm the schema actually persists between runs.** Right after a successful deploy, and again just before the next one, run `DESCRIBE SCHEMA demo.landing;`. If it's gone before the second deploy, something outside the bundle (a notebook, a job, a manual step) is dropping it. That's your real problem.
2. **Stop managing the schema in the bundle** (easiest fix if the schema is long-lived). If `demo.landing` is basically a stable container for external ingestion, you don't need DAB to own its lifecycle. Instead:
   - Create `demo.landing` once, manually or via Terraform.
   - Remove the `resources.schemas.landing` block from your bundle.
   - In your volume definitions, reference the literal names (`demo` / `landing`) instead of the schema resource.

   DAB will then manage only the volumes and assume the catalog and schema already exist. That sidesteps the "skip schema, schema not found" path entirely.
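   To make that concrete, a volume defined with literal names rather than a schema-resource reference might look roughly like this. This is only a sketch: the `raw_files` key and the storage location are placeholders, not taken from your bundle.

   ```yaml
   # databricks.yml fragment (sketch; resource key and storage_location are placeholders)
   resources:
     volumes:
       raw_files:
         name: raw_files
         catalog_name: demo     # literal catalog name, no ${resources.schemas...} reference
         schema_name: landing   # literal schema name; schema is pre-provisioned outside the bundle
         volume_type: EXTERNAL
         storage_location: abfss://landing@yourstorageaccount.dfs.core.windows.net/raw  # placeholder
   ```

   With this shape, a deploy should never need to touch the schema at all.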
3. **Verify you're targeting the same workspace and metastore both times.** `SCHEMA_DOES_NOT_EXIST` can also surface when the second run points the bundle at a different workspace or metastore: a different profile, a different target, a different URL. Worth double-checking.
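   One way to rule this out is to pin the workspace host explicitly on each target, so a deploy can't silently follow whatever CLI profile happens to be active. A rough sketch (the host URL is a placeholder):

   ```yaml
   # databricks.yml fragment (sketch; host is a placeholder URL)
   targets:
     dev:
       default: true
       workspace:
         host: https://adb-1234567890123456.7.azuredatabricks.net
   ```

   Running `databricks bundle validate -t dev` should then print the resolved host, which you can compare between the first deploy and the failing redeploy.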
4. **Keep `skip_name_prefix_for_schema` off.** You already tried removing it, which is the right call. That flag is experimental and can affect how resource IDs are computed. Don't bring it back in anything resembling production until you have a stable pattern.
5. **If DAB really needs to own the schema.** If `demo.landing` has to be created and destroyed by this bundle (ephemeral environments, etc.), this may be hitting a current limitation in how DAB refreshes schema state in direct mode when external volumes are involved. If that's the case, open a Databricks Support ticket with:
   - Your bundle YAML (redacted as needed)
   - Workspace URL and metastore name
   - Timestamps of the first successful deploy and the failing redeploy
   - Output of `SHOW SCHEMAS IN demo;` before and after
The most reliable near-term path is option 2 — treat the schema as pre-provisioned infrastructure and let the bundle manage only what lives inside it.
Hope that helps narrow it down.
Cheers, Lou