- 32 Views
- 1 replies
- 0 kudos
Hi, I am using Databricks CLI 0.227.1 to create a bundle project to deploy a job. As per this, https://learn.microsoft.com/en-us/azure/databricks/dev-tools/bundles/variables, I wanted to have variable-overrides.json hold my variables. I created a js...
Latest Reply
Hi there @Venu, you have to specify the names of those variables in databricks.yml as well:

variables:
  task_key:
  metadata_schema:

then you can reference them later in your job definitions as ${var.task_key} or ${var.job_cluster_key}, the way you did, and ...
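For reference, a hedged sketch of the override file the thread is about. Per the linked docs, bundle variable overrides go in .databricks/bundle/&lt;target&gt;/variable-overrides.json; the values below are hypothetical:

    {
      "task_key": "ingest_task",
      "metadata_schema": "my_metadata_schema"
    }

The keys must match the variable names declared in databricks.yml as shown above.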
by 797646 • New Contributor II
- 58 Views
- 2 replies
- 0 kudos
Queries with big results are executed on the cluster. If we specify a calculated measure as something like cal1 as count(*) / count(distinct field1), it will wrap it in backticks as `count(*) / count(distinct field1)` as `cal1`. Functions are not identified in...
Latest Reply
Hi @Brahmareddy, this didn't work: count(*) * 1.0 / count(distinct field1) AS cal1 gave me the same error. But as per this feature release, https://docs.databricks.com/aws/en/dashboards/datasets/calculated-measures, this should work out of the box; otherwise it's...
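Until calculated measures behave as documented, one workaround is to compute the ratio directly in the dataset's SQL instead; a minimal sketch, with my_table as a hypothetical source:

    SELECT
      count(*) * 1.0 / count(DISTINCT field1) AS cal1
    FROM my_table;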
1 More Replies
- 54 Views
- 2 replies
- 0 kudos
Latest Reply
Hey @BobCat62, this might help: DLT will be in direct publishing mode by default. If you select hive_metastore, you must specify the default schema in the DLT pipeline settings. If that is not done there, then at the time of defining the DLT table, pass the schema_name...
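A minimal sketch of that last suggestion, assuming a hive_metastore pipeline with no default schema set; my_schema, my_table, and source_table are hypothetical names:

    import dlt

    # Qualify the target schema directly in the table name when the
    # pipeline settings do not define a default schema.
    @dlt.table(name="my_schema.my_table")
    def my_table():
        return spark.read.table("source_table")  # spark is provided by the pipeline runtime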
1 More Replies
- 26 Views
- 2 replies
- 0 kudos
The Databricks assistant tells me (sometimes) that `CREATE TEMP TABLE` is a valid SQL operation. And other sources (e.g., https://www.freecodecamp.org/news/sql-temp-table-how-to-create-a-temporary-sql-table/) say the same. But in actual practice, thi...
Latest Reply
You can create temp tables in DLT pipelines as well. Simply:

@dlt.table(name="temp_table", temporary=True)
def temp_table():
    return <any_query>
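Outside DLT, note that Databricks SQL rejects CREATE TEMP TABLE; a session-scoped temporary view is the usual equivalent. A minimal sketch, with my_source as a hypothetical table:

    CREATE OR REPLACE TEMPORARY VIEW temp_table AS
    SELECT * FROM my_source;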
1 More Replies
- 60 Views
- 2 replies
- 0 kudos
Hello Team, I have removed the definition of a table from a Delta Live Tables pipeline, but the table is still present in Unity Catalog. In the event log, it is giving the below message:
Materialized View '`catalog1`.`schema1`.`table1`' is no longer defined in the pipeline a...
Latest Reply
Hi @Anish_2, how are you doing today? I agree with @KaranamS's answer. Databricks marks the table as inactive instead of removing it to prevent accidental data loss, allowing you to restore it if needed. Once inactive, the table remains in Unity Catalo...
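If you do want the object gone from Unity Catalog after removing it from the pipeline, a hedged sketch using the names from the event log message:

    -- drops the now-inactive materialized view that the pipeline left behind
    DROP MATERIALIZED VIEW catalog1.schema1.table1;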
1 More Replies
by MrFi • New Contributor
- 118 Views
- 1 replies
- 0 kudos
We are encountering an issue with volumes created inside Unity Catalog. We are using AWS and Terraform to host Databricks, and our Unity Catalog structure is as follows:
• Catalog: catalog_name
• Schemas: raw, bronze, silver, gold (all with external l...
Latest Reply
Hi @MrFi, how are you doing today? As per my understanding, it looks like the Unity Catalog UI might have trouble handling external volumes, even though dbutils works fine. Try running SHOW VOLUMES IN catalog_name.raw; to check if the volume is properl...
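Two quick checks, sketched with my_volume as a hypothetical volume name:

    SHOW VOLUMES IN catalog_name.raw;
    DESCRIBE VOLUME catalog_name.raw.my_volume;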
- 538 Views
- 8 replies
- 0 kudos
We get the following error with some basic views and not others when using serverless compute (from a notebook, the SQL Editor, or the Catalog Explorer). Views are a simple select * from table x, and the underlying schemas/tables are using managed m...
Latest Reply
@ceceliac, just a quick check: if you rerun the same query after it has initially failed, will it go through or still fail? If it runs fine, wait another 10-15 mins, rerun it, and share the outcome. So:
1. Run it once; it will fail.
2. Rerun it imm...
7 More Replies
- 104 Views
- 1 replies
- 0 kudos
Having a Delta table with a history of 15 versions (see screenshot). After running the command:
RESTORE TABLE hive_metastore.my_schema.my_table TO VERSION AS OF 6;
And then running DESCRIBE HISTORY (see screenshot), it seems that a new version (RESTOR...
Latest Reply
It's not. I haven't observed this behavior. According to the Delta Lake documentation: "Using the restore command resets the table's content to an earlier version, but doesn't remove any data. It simply updates the transaction log to indicate that cer...
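A sketch of the sequence under discussion. RESTORE writes a new commit to the transaction log, so DESCRIBE HISTORY gains a version rather than losing any:

    RESTORE TABLE hive_metastore.my_schema.my_table TO VERSION AS OF 6;
    -- The newest history entry is now a RESTORE operation on top of the prior versions.
    DESCRIBE HISTORY hive_metastore.my_schema.my_table;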
- 2011 Views
- 2 replies
- 1 kudos
Hi, does anyone know how to link Aurora to Databricks directly and load data into Databricks automatically on a schedule, without any third-party tools in the middle?
Latest Reply
AWS Aurora supports PostgreSQL or MySQL; did you try to connect using JDBC?

url = f"jdbc:postgresql://{database_host}:{database_port}/{database_name}"
remote_table = (spark.read.format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("dbtable...
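For completeness, a hedged sketch of the full read, runnable in a Databricks notebook; every connection value here is a hypothetical placeholder, and the password is best pulled from a secret scope:

    # All host/database/table/credential values below are hypothetical.
    driver = "org.postgresql.Driver"
    database_host = "my-aurora.cluster-xxxx.us-east-1.rds.amazonaws.com"
    database_port = "5432"
    database_name = "mydb"
    table = "public.my_table"
    user = "my_user"
    password = dbutils.secrets.get("my_scope", "aurora_password")  # hypothetical scope/key

    url = f"jdbc:postgresql://{database_host}:{database_port}/{database_name}"

    remote_table = (
        spark.read.format("jdbc")
        .option("driver", driver)
        .option("url", url)
        .option("dbtable", table)
        .option("user", user)
        .option("password", password)
        .load()
    )

Scheduling can then be handled by a Databricks job, which covers the "on a schedule without third-party tools" part of the question.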
1 More Replies
- 53 Views
- 2 replies
- 1 kudos
Hi guys, for running a job with a varying workload, what should I use: a serverless cluster or job compute? What are the positives and negatives? (I'll be running my notebook from Azure Data Factory.)
Latest Reply
It depends on the cost, performance, and startup time needed for your use case. Serverless compute is usually the preferred choice because of its fast startup time and dynamic scaling. However, if your workload is long-running and predictable, job compute with...
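If job compute wins out, a hedged sketch of an autoscaling job cluster definition (bundle/job YAML; all values hypothetical), which absorbs varying load by scaling workers:

    job_clusters:
      - job_cluster_key: autoscaling_cluster
        new_cluster:
          spark_version: 15.4.x-scala2.12
          node_type_id: Standard_DS3_v2
          autoscale:
            min_workers: 1
            max_workers: 8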
1 More Replies
- 12003 Views
- 4 replies
- 4 kudos
We have provisioned a new workspace in Azure using our own VNet. Upon creating the first cluster, I encounter this error:
Control Plane Request Failure: Failed to get instance bootstrap steps from the Databricks Control Plane. Please check that instan...
by Phani1 • Valued Contributor II
- 44 Views
- 1 replies
- 0 kudos
Hi Team, we've noticed that for some use cases, customers are proposing an architecture with A) Fabric in the Gold layer and reporting in Azure Power BI, while using Databricks for the Bronze and Silver layers. However, we can also have B) the Gold lay...
Latest Reply
A Gold layer in Databricks connected to Power BI is a good option. However, if you need to use some Fabric capabilities because your team prefers T-SQL, Direct Lake, Python notebooks, or low-code tools like Data Factory, MS Fabr...
by dzsuzs • New Contributor II
- 1380 Views
- 3 replies
- 2 kudos
I have a stateless streaming application that uses foreachBatch. This function executes between 10 and 400 times each hour based on custom logic. The logic within foreachBatch includes: collect() on very small DataFrames (a few megabytes) --> driver mem...
Latest Reply
Did you ever figure out what is causing the memory leak? We are experiencing a nearly identical issue where the memory gradually increases over time and OOM after a few days. I did track down this open bug ticket that states there is a memory leak ...
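For context, a minimal sketch of the pattern the thread describes (stateless stream, foreachBatch, collect() on a small batch); the source table and checkpoint path are hypothetical:

    # Runs in a Databricks notebook where `spark` is predefined.
    def process_batch(batch_df, batch_id):
        rows = batch_df.collect()  # safe only because each batch is a few MB
        # ... custom logic over rows ...

    (spark.readStream.table("source_table")                     # hypothetical source
        .writeStream
        .option("checkpointLocation", "/tmp/checkpoints/demo")  # hypothetical path
        .foreachBatch(process_batch)
        .start())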
2 More Replies
- 75 Views
- 3 replies
- 1 kudos
Hi Everyone, trying to read JSON files with Auto Loader is failing to infer the schema correctly; every nested or struct column is being inferred as a string.

spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloud...
Latest Reply
Hi @robertomatus, you're right; it would be much better if we didn't have to rely on workarounds. The reason Auto Loader infers schema differently from spark.read.json() is that it's optimized for streaming large-scale data efficiently. Unlike spark.re...
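One knob worth noting here: Auto Loader defaults to string types for JSON unless column-type inference is switched on. A hedged sketch, with hypothetical paths:

    df = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/tmp/schemas/demo")  # hypothetical
        .option("cloudFiles.inferColumnTypes", "true")  # infer structs/numbers, not strings
        .load("/tmp/input/demo")                                   # hypothetical
    )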
2 More Replies
by N38 • New Contributor II
- 481 Views
- 10 replies
- 4 kudos
I am trying the below queries using both a SQL warehouse and a shared cluster on Databricks Runtime (15.4/16.1) with Unity Catalog:

SELECT * FROM event_log(table(my_catalog.myschema.bronze_employees))
SELECT * FROM event_log("6b317553-5c5a-40d5-9541-1a5...